From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from psmtp.com (na3sys010amx128.postini.com [74.125.245.128])
	by kanga.kvack.org (Postfix) with SMTP id 206FF6B0070
	for ; Fri, 9 Aug 2013 05:27:18 -0400 (EDT)
From: Joonsoo Kim
Subject: [PATCH v2 20/20] mm, hugetlb: remove a hugetlb_instantiation_mutex
Date: Fri, 9 Aug 2013 18:26:38 +0900
Message-Id: <1376040398-11212-21-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1376040398-11212-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1376040398-11212-1-git-send-email-iamjoonsoo.kim@lge.com>
Sender: owner-linux-mm@kvack.org
List-ID:
To: Andrew Morton
Cc: Rik van Riel, Mel Gorman, Michal Hocko, "Aneesh Kumar K.V",
	KAMEZAWA Hiroyuki, Hugh Dickins, Davidlohr Bueso, David Gibson,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Joonsoo Kim,
	Wanpeng Li, Naoya Horiguchi, Hillf Danton, Joonsoo Kim

Now that we have the infrastructure in place to remove this awkward mutex,
which serializes all faulting tasks, remove it.

Signed-off-by: Joonsoo Kim

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0501fe5..f2c3a51 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2504,9 +2504,7 @@ static int unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
 
 /*
  * Hugetlb_cow() should be called with page lock of the original hugepage held.
- * Called with hugetlb_instantiation_mutex held and pte_page locked so we
- * cannot race with other handlers or page migration.
- * Keep the pte_same checks anyway to make transition from the mutex easier.
+ * Called with pte_page locked so we cannot race with page migration.
  */
 static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 			unsigned long address, pte_t *ptep, pte_t pte,
@@ -2844,7 +2842,6 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	int ret;
 	struct page *page = NULL;
 	struct page *pagecache_page = NULL;
-	static DEFINE_MUTEX(hugetlb_instantiation_mutex);
 	struct hstate *h = hstate_vma(vma);
 
 	address &= huge_page_mask(h);
@@ -2864,17 +2861,9 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (!ptep)
 		return VM_FAULT_OOM;
 
-	/*
-	 * Serialize hugepage allocation and instantiation, so that we don't
-	 * get spurious allocation failures if two CPUs race to instantiate
-	 * the same page in the page cache.
-	 */
-	mutex_lock(&hugetlb_instantiation_mutex);
 	entry = huge_ptep_get(ptep);
-	if (huge_pte_none(entry)) {
-		ret = hugetlb_no_page(mm, vma, address, ptep, flags);
-		goto out_mutex;
-	}
+	if (huge_pte_none(entry))
+		return hugetlb_no_page(mm, vma, address, ptep, flags);
 
 	ret = 0;
 
@@ -2887,10 +2876,8 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * consumed.
 	 */
 	if ((flags & FAULT_FLAG_WRITE) && !huge_pte_write(entry)) {
-		if (vma_needs_reservation(h, vma, address) < 0) {
-			ret = VM_FAULT_OOM;
-			goto out_mutex;
-		}
+		if (vma_needs_reservation(h, vma, address) < 0)
+			return VM_FAULT_OOM;
 
 		if (!(vma->vm_flags & VM_MAYSHARE))
 			pagecache_page = hugetlbfs_pagecache_page(h,
@@ -2939,9 +2926,6 @@ out_page_table_lock:
 		unlock_page(page);
 	put_page(page);
 
-out_mutex:
-	mutex_unlock(&hugetlb_instantiation_mutex);
-
 	return ret;
 }
-- 
1.7.9.5