From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 27 Jan 2021 16:09:21 -0800
From: Andrew Morton
To: Miaohe Lin
Subject: Re: [PATCH] mm/rmap: Fix potential pte_unmap on an unmapped pte
Message-Id: <20210127160921.989f01c83d6703148f6bc316@linux-foundation.org>
In-Reply-To: <20210127093349.39081-1-linmiaohe@huawei.com>
References: <20210127093349.39081-1-linmiaohe@huawei.com>

On Wed, 27 Jan 2021 04:33:49 -0500 Miaohe Lin wrote:

> For a PMD-mapped page (usually a THP), pvmw->pte is NULL. For a
> PTE-mapped THP, pvmw->pte is mapped. But for HugeTLB pages, pvmw->pte is
> not mapped; it is set directly to the relevant page table entry. So in
> page_vma_mapped_walk_done(), we may call pte_unmap() on a HugeTLB pte
> that was never mapped. Fix this by checking pvmw->page with PageHuge()
> before calling pte_unmap().
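
For readers following along: the three cases above come from the way
page_vma_mapped_walk() assigns pvmw->pte.  Roughly -- this is a condensed
sketch of the relevant branches in mm/page_vma_mapped.c of that era, not
verbatim kernel code:

	if (unlikely(PageHuge(pvmw->page))) {
		/* HugeTLB: huge_pte_offset() returns the entry directly.
		 * Nothing is kmapped, so nothing may be pte_unmap()ed. */
		pvmw->pte = huge_pte_offset(mm, pvmw->address,
					    page_size(pvmw->page));
	} else if (pmd_trans_huge(*pvmw->pmd)) {
		/* PMD-mapped THP: the walk operates on pvmw->pmd and
		 * leaves pvmw->pte NULL. */
	} else {
		/* PTE-mapped: pte_offset_map() may kmap the pte page on
		 * CONFIG_HIGHPTE, so it must be paired with pte_unmap(). */
		pvmw->pte = pte_offset_map(pvmw->pmd, pvmw->address);
	}
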
What are the runtime consequences of this?  Is there a workload which is
known to trigger it?  IOW, how do we justify a -stable backport of this
fix?

> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -213,7 +213,8 @@ struct page_vma_mapped_walk {
>  
>  static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
>  {
> -	if (pvmw->pte)
> +	/* A HugeTLB pte is never mapped via pte_offset_map(); don't unmap it. */
> +	if (pvmw->pte && !PageHuge(pvmw->page))
>  		pte_unmap(pvmw->pte);
>  	if (pvmw->ptl)
>  		spin_unlock(pvmw->ptl);
> -- 
> 2.19.1
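
For reference, the situation arises on the early-exit path of callers of
the walk.  Roughly -- a sketch modelled on the try_to_unmap_one() pattern
in mm/rmap.c, where the abort condition here is hypothetical:

	struct page_vma_mapped_walk pvmw = {
		.page = page,
		.vma = vma,
		.address = address,
	};

	while (page_vma_mapped_walk(&pvmw)) {
		if (must_abort) {	/* hypothetical early-exit test */
			/*
			 * _done() releases whatever the walk left mapped or
			 * locked.  For a HugeTLB page pvmw.pte is non-NULL
			 * but was never mapped via pte_offset_map() -- hence
			 * the PageHuge() check in the patch above.
			 */
			page_vma_mapped_walk_done(&pvmw);
			break;
		}
		/* ... otherwise operate on the mapping at pvmw.pte/pmd ... */
	}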