Date: Fri, 20 Dec 2019 09:23:59 +0100
From: Michal Hocko
To: Thomas Hellström (VMware)
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, pv-drivers@vmware.com,
 linux-graphics-maintainer@vmware.com, Thomas Hellstrom,
 Andrew Morton, "Matthew Wilcox (Oracle)", "Kirill A. Shutemov",
 Ralph Campbell, "Jérôme Glisse", "Christian König"
Subject: Re: [PATCH v4 1/2] mm: Add a vmf_insert_mixed_prot() function
Message-ID: <20191220082359.GD20332@dhcp22.suse.cz>
References: <20191212084741.9251-1-thomas_os@shipmail.org>
 <20191212084741.9251-2-thomas_os@shipmail.org>
In-Reply-To: <20191212084741.9251-2-thomas_os@shipmail.org>
User-Agent: Mutt/1.12.2 (2019-09-21)

On Thu 12-12-19 09:47:40, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom
>
> The TTM module today uses a hack to be able to set a different page
> protection than struct vm_area_struct::vm_page_prot. To be able to do
> this properly, add the needed vm functionality as vmf_insert_mixed_prot().
>
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: "Matthew Wilcox (Oracle)"
> Cc: "Kirill A. Shutemov"
> Cc: Ralph Campbell
> Cc: "Jérôme Glisse"
> Cc: "Christian König"
> Signed-off-by: Thomas Hellstrom
> Acked-by: Christian König

I cannot say I am happy about this, because it adds a discrepancy and
that is always tricky, but I do agree that a formalized discrepancy is
better than ad-hoc hacks, so

Acked-by: Michal Hocko

Thanks for extending the documentation.
> ---
>  include/linux/mm.h       |  2 ++
>  include/linux/mm_types.h |  7 ++++++-
>  mm/memory.c              | 43 ++++++++++++++++++++++++++++++++++++----
>  3 files changed, 47 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index cc292273e6ba..29575d3c1e47 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2548,6 +2548,8 @@ vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
>  			unsigned long pfn, pgprot_t pgprot);
>  vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
>  			pfn_t pfn);
> +vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
> +			pfn_t pfn, pgprot_t pgprot);
>  vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
>  			unsigned long addr, pfn_t pfn);
>  int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 2222fa795284..ac96afdbb4bc 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -307,7 +307,12 @@ struct vm_area_struct {
>  	/* Second cache line starts here. */
>
>  	struct mm_struct *vm_mm;	/* The address space we belong to. */
> -	pgprot_t vm_page_prot;		/* Access permissions of this VMA. */
> +
> +	/*
> +	 * Access permissions of this VMA.
> +	 * See vmf_insert_mixed() for discussion.
> +	 */
> +	pgprot_t vm_page_prot;
>  	unsigned long vm_flags;		/* Flags, see mm.h. */
>
>  	/*
> diff --git a/mm/memory.c b/mm/memory.c
> index b1ca51a079f2..269a8a871e83 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1646,6 +1646,9 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
>   * vmf_insert_pfn_prot should only be used if using multiple VMAs is
>   * impractical.
>   *
> + * See vmf_insert_mixed_prot() for a discussion of the implication of using
> + * a value of @pgprot different from that of @vma->vm_page_prot.
> + *
>   * Context: Process context. May allocate using %GFP_KERNEL.
>   * Return: vm_fault_t value.
>   */
> @@ -1719,9 +1722,9 @@ static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn)
>  }
>
>  static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
> -		unsigned long addr, pfn_t pfn, bool mkwrite)
> +		unsigned long addr, pfn_t pfn, pgprot_t pgprot,
> +		bool mkwrite)
>  {
> -	pgprot_t pgprot = vma->vm_page_prot;
>  	int err;
>
>  	BUG_ON(!vm_mixed_ok(vma, pfn));
> @@ -1764,10 +1767,42 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
>  	return VM_FAULT_NOPAGE;
>  }
>
> +/**
> + * vmf_insert_mixed_prot - insert single pfn into user vma with specified pgprot
> + * @vma: user vma to map to
> + * @addr: target user address of this page
> + * @pfn: source kernel pfn
> + * @pgprot: pgprot flags for the inserted page
> + *
> + * This is exactly like vmf_insert_mixed(), except that it allows drivers
> + * to override pgprot on a per-page basis.
> + *
> + * Typically this function should be used by drivers to set caching- and
> + * encryption bits different than those of @vma->vm_page_prot, because
> + * the caching- or encryption mode may not be known at mmap() time.
> + * This is ok as long as @vma->vm_page_prot is not used by the core vm
> + * to set caching and encryption bits for those vmas (except for COW pages).
> + * This is ensured by core vm only modifying these page table entries using
> + * functions that don't touch caching- or encryption bits, using pte_modify()
> + * if needed. (See for example mprotect()).
> + * Also when new page-table entries are created, this is only done using the
> + * fault() callback, and never using the value of vma->vm_page_prot,
> + * except for page-table entries that point to anonymous pages as the result
> + * of COW.
> + *
> + * Context: Process context. May allocate using %GFP_KERNEL.
> + * Return: vm_fault_t value.
> + */
> +vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
> +			pfn_t pfn, pgprot_t pgprot)
> +{
> +	return __vm_insert_mixed(vma, addr, pfn, pgprot, false);
> +}
> +
>  vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
>  		pfn_t pfn)
>  {
> -	return __vm_insert_mixed(vma, addr, pfn, false);
> +	return __vm_insert_mixed(vma, addr, pfn, vma->vm_page_prot, false);
>  }
>  EXPORT_SYMBOL(vmf_insert_mixed);
>
> @@ -1779,7 +1814,7 @@ EXPORT_SYMBOL(vmf_insert_mixed);
>  vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
>  		unsigned long addr, pfn_t pfn)
>  {
> -	return __vm_insert_mixed(vma, addr, pfn, true);
> +	return __vm_insert_mixed(vma, addr, pfn, vma->vm_page_prot, true);
>  }
>  EXPORT_SYMBOL(vmf_insert_mixed_mkwrite);
>
> --
> 2.21.0

-- 
Michal Hocko
SUSE Labs