From mboxrd@z Thu Jan 1 00:00:00 1970
From: ira.weiny@intel.com
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski,
	Peter Zijlstra
Cc: Ira Weiny, x86@kernel.org, Dave Hansen, Dan Williams, Vishal Verma,
	Andrew Morton, Fenghua Yu, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org
Subject: [RFC PATCH 12/15] kmap: Add stray write protection for device pages
Date: Tue, 14 Jul 2020 00:02:17 -0700
Message-Id: <20200714070220.3500839-13-ira.weiny@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200714070220.3500839-1-ira.weiny@intel.com>
References: <20200714070220.3500839-1-ira.weiny@intel.com>
MIME-Version: 1.0

From: Ira Weiny <ira.weiny@intel.com>

Device managed pages may have additional protections.  These protections
need to be removed prior to valid use by kernel users.

Check for special treatment of device managed pages in kmap and take
action if needed.  We use kmap as the interface for generic kernel code
because, under normal circumstances, it would be a bug for generic
kernel code to access kernel memory without calling kmap first.
Hooking the check into kmap therefore lets any valid kernel user access
these pages seamlessly, with no changes required in the callers.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 include/linux/highmem.h | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index d6e82e3de027..7f809d8d5a94 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -8,6 +8,7 @@
 #include <linux/mm.h>
 #include <linux/uaccess.h>
 #include <linux/hardirq.h>
+#include <linux/memremap.h>
 
 #include <asm/cacheflush.h>
 
@@ -31,6 +32,20 @@ static inline void invalidate_kernel_vmap_range(void *vaddr, int size)
 
 #include <asm/kmap_types.h>
 
+static inline void enable_access(struct page *page)
+{
+	if (!page_is_access_protected(page))
+		return;
+	dev_access_enable();
+}
+
+static inline void disable_access(struct page *page)
+{
+	if (!page_is_access_protected(page))
+		return;
+	dev_access_disable();
+}
+
 #ifdef CONFIG_HIGHMEM
 extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
 extern void kunmap_atomic_high(void *kvaddr);
@@ -55,6 +70,11 @@ static inline void *kmap(struct page *page)
 	else
 		addr = kmap_high(page);
 	kmap_flush_tlb((unsigned long)addr);
+	/*
+	 * Even non-highmem pages may have additional access protections which
+	 * need to be checked and potentially enabled.
+	 */
+	enable_access(page);
 	return addr;
 }
 
@@ -63,6 +83,11 @@ void kunmap_high(struct page *page);
 static inline void kunmap(struct page *page)
 {
 	might_sleep();
+	/*
+	 * Even non-highmem pages may have additional access protections which
+	 * need to be checked and potentially disabled.
+	 */
+	disable_access(page);
 	if (!PageHighMem(page))
 		return;
 	kunmap_high(page);
@@ -85,6 +110,7 @@ static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 {
 	preempt_disable();
 	pagefault_disable();
+	enable_access(page);
 	if (!PageHighMem(page))
 		return page_address(page);
 	return kmap_atomic_high_prot(page, prot);
@@ -137,6 +163,7 @@ static inline unsigned long totalhigh_pages(void) { return 0UL; }
 
 static inline void *kmap(struct page *page)
 {
 	might_sleep();
+	enable_access(page);
 	return page_address(page);
 }
 
@@ -146,6 +173,7 @@ static inline void kunmap_high(struct page *page)
 
 static inline void kunmap(struct page *page)
 {
+	disable_access(page);
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
 	kunmap_flush_on_unmap(page_address(page));
 #endif
@@ -155,6 +183,7 @@ static inline void *kmap_atomic(struct page *page)
 {
 	preempt_disable();
 	pagefault_disable();
+	enable_access(page);
 	return page_address(page);
 }
 #define kmap_atomic_prot(page, prot)	kmap_atomic(page)
@@ -216,7 +245,8 @@ static inline void kmap_atomic_idx_pop(void)
 #define kunmap_atomic(addr)                                     \
 do {                                                            \
 	BUILD_BUG_ON(__same_type((addr), struct page *));      \
-	kunmap_atomic_high(addr);                               \
+	disable_access(kmap_to_page(addr));                     \
+	kunmap_atomic_high(addr);                               \
 	pagefault_enable();                                     \
 	preempt_enable();                                       \
} while (0)
-- 
2.25.1
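
To illustrate the intended calling convention: generic kernel code needs
no new calls, since the protection toggling rides on the existing
kmap()/kunmap() pairing.  A minimal sketch follows; write_to_dev_page()
and its parameters are hypothetical, for illustration only:

	#include <linux/highmem.h>
	#include <linux/string.h>

	/*
	 * Hypothetical helper: copy @len bytes (@len <= PAGE_SIZE) into a
	 * page which may be an access-protected device page.
	 */
	static void write_to_dev_page(struct page *page, const void *src,
				      size_t len)
	{
		void *addr = kmap(page);	/* enable_access() runs here */

		memcpy(addr, src, len);
		kunmap(page);			/* disable_access() runs here */
	}

The same pairing holds in atomic context with kmap_atomic() and
kunmap_atomic(); there kunmap_atomic() recovers the struct page via
kmap_to_page(addr) before dropping the protection.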