Subject: Re: [PATCH v2] hugetlb: fix memory leak associated with vma_lock structure
From: Miaohe Lin
To: Mike Kravetz
Cc: Muchun Song, David Hildenbrand, Sven Schnelle, Michal Hocko, Peter Xu,
 Naoya Horiguchi, Aneesh Kumar K.V, Andrea Arcangeli, Kirill A. Shutemov,
 Davidlohr Bueso, Prakash Sangappa, James Houghton, Mina Almasry,
 Pasha Tatashin, Axel Rasmussen, Ray Fucillo, Andrew Morton
Date: Thu, 20 Oct 2022 10:03:27 +0800
Message-ID: <16389ecd-3c5f-2452-b738-a426397feee4@huawei.com>
In-Reply-To: <20221019201957.34607-1-mike.kravetz@oracle.com>
References: <20221019201957.34607-1-mike.kravetz@oracle.com>
On 2022/10/20 4:19, Mike Kravetz wrote:
> The hugetlb vma_lock structure hangs off the vm_private_data pointer
> of sharable hugetlb vmas. The structure is vma specific and cannot
> be shared between vmas. At fork and various other times, vmas are
> duplicated via vm_area_dup(). When this happens, the pointer in the
> newly created vma must be cleared and the structure reallocated. Two
> hugetlb specific routines deal with this: hugetlb_dup_vma_private and
> hugetlb_vm_op_open. Both routines are called for newly created vmas.
> hugetlb_dup_vma_private would always clear the pointer and
> hugetlb_vm_op_open would allocate the new vma_lock structure. This did
> not work in the case of the calling sequence pointed out in [1]:
>
>   move_vma
>     copy_vma
>       new_vma = vm_area_dup(vma);
>       new_vma->vm_ops->open(new_vma);  --> new_vma has its own vma lock.
>     is_vm_hugetlb_page(vma)
>       clear_vma_resv_huge_pages
>         hugetlb_dup_vma_private  --> vma->vm_private_data is set to NULL
>
> When clearing the pointer in hugetlb_dup_vma_private we actually leak
> the associated vma_lock structure.
>
> The vma_lock structure contains a pointer to the associated vma. This
> information can be used in hugetlb_dup_vma_private and hugetlb_vm_op_open
> to ensure we only clear the vm_private_data of newly created (copied)
> vmas. In such cases, the vma->vma_lock->vma field will not point to the
> vma.
>
> Update hugetlb_dup_vma_private and hugetlb_vm_op_open to not clear
> vm_private_data if vma->vma_lock->vma == vma.
> Also, log a warning if
> hugetlb_vm_op_open ever encounters the case where vma_lock has already
> been correctly allocated for the vma.
>
> [1] https://lore.kernel.org/linux-mm/5154292a-4c55-28cd-0935-82441e512fc3@huawei.com/
>
> Fixes: 131a79b474e9 ("hugetlb: fix vma lock handling during split vma and range unmapping")
> Signed-off-by: Mike Kravetz

Thanks for the update. This patch looks good to me.

Reviewed-by: Miaohe Lin

Thanks,
Miaohe Lin

> ---
> v2 - Allocate vma_lock in hugetlb_vm_op_open if !vma->vm_private_data
>      on entry.  Thanks Miaohe Lin!
>
>  mm/hugetlb.c | 35 +++++++++++++++++++++++++++--------
>  1 file changed, 27 insertions(+), 8 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 02f781624fce..ccdffc2fa1ca 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1014,15 +1014,23 @@ void hugetlb_dup_vma_private(struct vm_area_struct *vma)
>  	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
>  	/*
>  	 * Clear vm_private_data
> +	 * - For shared mappings this is a per-vma semaphore that may be
> +	 *   allocated in a subsequent call to hugetlb_vm_op_open.
> +	 *   Before clearing, make sure pointer is not associated with vma
> +	 *   as this will leak the structure.  This is the case when called
> +	 *   via clear_vma_resv_huge_pages() and hugetlb_vm_op_open has
> +	 *   already been called to allocate a new structure.
>  	 * - For MAP_PRIVATE mappings, this is the reserve map which does
>  	 *   not apply to children.  Faults generated by the children are
>  	 *   not guaranteed to succeed, even if read-only.
> -	 * - For shared mappings this is a per-vma semaphore that may be
> -	 *   allocated in a subsequent call to hugetlb_vm_op_open.
>  	 */
> -	vma->vm_private_data = (void *)0;
> -	if (!(vma->vm_flags & VM_MAYSHARE))
> -		return;
> +	if (vma->vm_flags & VM_MAYSHARE) {
> +		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
> +
> +		if (vma_lock && vma_lock->vma != vma)
> +			vma->vm_private_data = NULL;
> +	} else
> +		vma->vm_private_data = NULL;
>  }
>  
>  /*
> @@ -4601,6 +4609,7 @@ static void hugetlb_vm_op_open(struct vm_area_struct *vma)
>  	struct resv_map *resv = vma_resv_map(vma);
>  
>  	/*
> +	 * HPAGE_RESV_OWNER indicates a private mapping.
>  	 * This new VMA should share its siblings reservation map if present.
>  	 * The VMA will only ever have a valid reservation map pointer where
>  	 * it is being copied for another still existing VMA.  As that VMA
> @@ -4615,11 +4624,21 @@ static void hugetlb_vm_op_open(struct vm_area_struct *vma)
>  
>  	/*
>  	 * vma_lock structure for sharable mappings is vma specific.
> -	 * Clear old pointer (if copied via vm_area_dup) and create new.
> +	 * Clear old pointer (if copied via vm_area_dup) and allocate
> +	 * new structure.  Before clearing, make sure vma_lock is not
> +	 * for this vma.
>  	 */
>  	if (vma->vm_flags & VM_MAYSHARE) {
> -		vma->vm_private_data = NULL;
> -		hugetlb_vma_lock_alloc(vma);
> +		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
> +
> +		if (vma_lock) {
> +			if (vma_lock->vma != vma) {
> +				vma->vm_private_data = NULL;
> +				hugetlb_vma_lock_alloc(vma);
> +			} else
> +				pr_warn("HugeTLB: vma_lock already exists in %s.\n", __func__);
> +		} else
> +			hugetlb_vma_lock_alloc(vma);
>  	}
>  }
> 