Date: Mon, 11 Sep 2023 17:44:02 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Dave Hansen
Cc: Yin Fengwei, syzbot, akpm@linux-foundation.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	syzkaller-bugs@googlegroups.com
Subject: Re: [syzbot] [mm?] BUG: Bad page map (7)

On Mon, Sep 11, 2023 at 08:34:57AM -0700, Dave Hansen wrote:
> On 9/11/23 06:26, Matthew Wilcox wrote:
> > @@ -231,7 +235,10 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
> >  		if (--nr == 0)
> >  			break;
> >  		ptep++;
> > -		pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
> > +		if (__pte_needs_invert(pte_val(pte)))
> > +			pte = __pte(pte_val(pte) - (1UL << PFN_PTE_SHIFT));
> > +		else
> > +			pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
> >  	}
> >  	arch_leave_lazy_mmu_mode();
> >  }
> 
> This is much better than a whole x86 fork of set_ptes().  But it's still
> a bit wonky because it exposes the PTE inversion logic to generic code.

I saw that as an advantage ... let people know that it exists as a
concept.

> static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
> 		pte_t *ptep, pte_t pte, unsigned int nr)
> {
> 	pgprot_t prot = pte_pgprot(x);
> 	unsigned long pfn = pte_pfn(pte);
> 
> 	page_table_check_ptes_set(mm, ptep, pte, nr);
> 
> 	arch_enter_lazy_mmu_mode();
> 	for (;;) {
> 		set_pte(ptep, pte);
> 		if (--nr == 0)
> 			break;
> 		ptep++;
> 		pfn++;
> 		pte = pfn_pte(pfn, pgprot);
> 	}
> 	arch_leave_lazy_mmu_mode();
> }
> 
> Obviously completely untested. :)

After fixing your two typos, this assembles to 176 bytes more code than
my version.  Not sure that's great.

How about this?  Keeps the inverted knowledge entirely in arch/x86.
Compiles to exactly the same code as the version I sent earlier.
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index d6ad98ca1288..c9781b8b14af 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -955,6 +955,14 @@ static inline int pte_same(pte_t a, pte_t b)
 	return a.pte == b.pte;
 }
 
+static inline pte_t pte_next(pte_t pte)
+{
+	if (__pte_needs_invert(pte_val(pte)))
+		return __pte(pte_val(pte) - (1UL << PFN_PTE_SHIFT));
+	return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
+}
+#define pte_next pte_next
+
 static inline int pte_present(pte_t a)
 {
 	return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE);
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 1fba072b3dac..7a932ed59c27 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -205,6 +205,10 @@ static inline int pmd_young(pmd_t pmd)
 #define arch_flush_lazy_mmu_mode()	do {} while (0)
 #endif
 
+#ifndef pte_next
+#define pte_next(pte)	__pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT))
+#endif
+
 #ifndef set_ptes
 /**
  * set_ptes - Map consecutive pages to a contiguous range of addresses.
@@ -231,7 +235,7 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 		if (--nr == 0)
 			break;
 		ptep++;
-		pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
+		pte = pte_next(pte);
 	}
 	arch_leave_lazy_mmu_mode();
 }
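For anyone following the thread who hasn't met PTE inversion before (the
x86 L1TF mitigation that stores the PFN of a not-present entry
bit-inverted), here is a minimal userspace sketch of the idea.  It is
not kernel code: all the demo_* names and constants below are made up
and simplified stand-ins for the real x86 definitions.  It only shows
why stepping an inverted entry to the next page means subtracting
(1 << shift) from the raw value, which is what pte_next() hides from
generic code.

/*
 * Standalone model of PTE inversion.  NOT kernel code; the names and
 * constants are simplified stand-ins used purely for illustration.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_PAGE_SHIFT		12		/* 4KiB pages */
#define DEMO_PAGE_PRESENT	0x1ULL		/* "present" flag bit */
#define DEMO_PFN_MASK		(~0xfffULL)	/* PFN lives above the flag bits */

/* A non-empty, not-present entry stores its PFN bits inverted. */
static bool demo_needs_invert(uint64_t val)
{
	return val && !(val & DEMO_PAGE_PRESENT);
}

/* Recover the PFN, undoing the inversion when necessary. */
static uint64_t demo_pte_pfn(uint64_t val)
{
	uint64_t pfn_bits = val & DEMO_PFN_MASK;

	if (demo_needs_invert(val))
		pfn_bits = ~pfn_bits & DEMO_PFN_MASK;
	return pfn_bits >> DEMO_PAGE_SHIFT;
}

/* Analogue of pte_next(): advance the entry to the following page. */
static uint64_t demo_pte_next(uint64_t val)
{
	if (demo_needs_invert(val))
		return val - (1ULL << DEMO_PAGE_SHIFT);
	return val + (1ULL << DEMO_PAGE_SHIFT);
}

int main(void)
{
	uint64_t pfn = 0x1234;

	/* Present entry: PFN stored directly. */
	uint64_t present = (pfn << DEMO_PAGE_SHIFT) | DEMO_PAGE_PRESENT;

	/* Not-present entry: PFN stored inverted, flag bits left alone. */
	uint64_t inverted = ~(pfn << DEMO_PAGE_SHIFT) & DEMO_PFN_MASK;

	assert(demo_pte_pfn(present) == pfn);
	assert(demo_pte_pfn(inverted) == pfn);

	/* Advancing either encoding must land on pfn + 1 ... */
	assert(demo_pte_pfn(demo_pte_next(present)) == pfn + 1);
	assert(demo_pte_pfn(demo_pte_next(inverted)) == pfn + 1);

	/* ... which is why the inverted case subtracts from the raw value. */
	printf("both encodings advance to pfn 0x%llx\n",
	       (unsigned long long)(pfn + 1));
	return 0;
}

Built with any C compiler and run, both pairs of asserts pass; drop the
subtraction branch and the second pair trips, which is essentially the
bad-PFN symptom the syzbot report caught in set_ptes().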