From: Mike Kravetz <mike.kravetz@oracle.com>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
Michal Hocko <mhocko@kernel.org>, Hugh Dickins <hughd@google.com>,
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
"Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>,
Andrea Arcangeli <aarcange@redhat.com>,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
Davidlohr Bueso <dave@stgolabs.net>
Subject: Re: [PATCH RFC 1/1] hugetlbfs: introduce truncation/fault mutex to avoid races
Date: Mon, 8 Oct 2018 17:20:15 -0700
Message-ID: <7b35a36e-62a0-ca84-00cf-a12a3233cb07@oracle.com>
In-Reply-To: <20181008080323.xg3v35uxgmakf6wy@kshutemo-mobl1>
On 10/8/18 1:03 AM, Kirill A. Shutemov wrote:
> On Sun, Oct 07, 2018 at 04:38:48PM -0700, Mike Kravetz wrote:
>> The following hugetlbfs truncate/page fault race can be recreated
>> with programs doing something like the following.
>>
>> A hugetlbfs file is mapped via mmap(MAP_SHARED) with a size of 4 huge
>> pages. At mmap time, 4 huge pages are reserved for the file/mapping,
>> so the global reserve count is 4. In addition, since this is a shared
>> mapping, an entry for 4 pages is added to the file's reserve map.
>> The first 3 of the 4 pages are faulted into the file. As a result,
>> the global reserve count is now 1.
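>>
>> For illustration, a minimal sketch of this setup (the mount point
>> /mnt/huge and the 2MB huge page size are assumptions; both must
>> match the test system):
>>
>>     #include <fcntl.h>
>>     #include <string.h>
>>     #include <sys/mman.h>
>>
>>     #define HPAGE_SIZE (2UL * 1024 * 1024)  /* assumed huge page size */
>>
>>     int main(void)
>>     {
>>         /* hypothetical hugetlbfs mount point */
>>         int fd = open("/mnt/huge/testfile", O_CREAT | O_RDWR, 0644);
>>         char *addr;
>>         int i;
>>
>>         if (fd < 0)
>>             return 1;
>>
>>         /* 4 huge pages are reserved for this shared mapping */
>>         addr = mmap(NULL, 4 * HPAGE_SIZE, PROT_READ | PROT_WRITE,
>>                     MAP_SHARED, fd, 0);
>>         if (addr == MAP_FAILED)
>>             return 1;
>>
>>         /* fault in the first 3 pages; global reserve count drops to 1 */
>>         for (i = 0; i < 3; i++)
>>             memset(addr + i * HPAGE_SIZE, 0, HPAGE_SIZE);
>>
>>         /* task A then faults page 4 while task B truncates to size 0 */
>>         return 0;
>>     }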
>>
>> Task A starts to fault in the last page (routines hugetlb_fault,
>> hugetlb_no_page). It allocates a huge page (alloc_huge_page).
>> The reserve map indicates there is a reserved page, so this is
>> used and the global reserve count goes to 0.
>>
>> Now, task B truncates the file to size 0. It starts by setting
>> inode size to 0 (hugetlb_vmtruncate). It then unmaps all mappings
>> of the file (hugetlb_vmdelete_list). Since task A's page table
>> lock is not held at the time, truncation is not blocked. Truncation
>> removes the 3 pages from the file (remove_inode_hugepages). When
>> cleaning up the reserved pages (hugetlb_unreserve_pages), it notices
>> the reserve map was for 4 pages. However, it has only freed 3 pages.
>> So it assumes there is still (4 - 3) = 1 reserved page. It then
>> decrements the global reserve count by 1, and the count goes
>> negative.
>>
>> Task A then continues the page fault process and adds its newly
>> acquired page to the page cache. Note that the index of this page
>> is beyond the size of the truncated file (0). The page fault process
>> then notices the file has been truncated and exits. However, the
>> page is left in the cache associated with the file.
>>
>> Now, if the file is immediately deleted the truncate code runs again.
>> It will find and free the one page associated with the file. When
>> cleaning up reserves, it notices the reserve map is empty, yet one
>> page was freed. So, the global reserve count is decremented by
>> (0 - 1) = -1; that is, it is incremented by 1. This returns the
>> global count to 0, as it should be. But, it is
>> possible for someone else to mmap this file/range before it is deleted.
>> If this happens, a reserve map entry for the allocated page is created
>> and the reserved page is forever leaked.
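>>
>> To summarize the reserve accounting through the race (map = pages
>> covered by the file's reserve map, rsv = global reserve count):
>>
>>     mmap 4 pages (shared)        map = 4    rsv = 4
>>     fault in pages 1-3           map = 4    rsv = 1
>>     task A allocates page 4      map = 4    rsv = 0
>>     truncate frees pages 1-3     map = 0    rsv = 0 - (4 - 3) = -1
>>     delete frees page 4          map = 0    rsv = -1 - (0 - 1) = 0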
>>
>> To avoid all these conditions, let's simply prevent faults to a file
>> while it is being truncated. Add a new truncation-specific rw mutex
>> to the hugetlbfs inode extensions. Faults take the mutex in read
>> mode; truncation takes it in write mode.
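>>
>> Roughly, the locking would look like the following (the trunc_rwsem
>> field name is illustrative only, not the final patch):
>>
>>     /* fault path (hugetlb_no_page): */
>>     down_read(&HUGETLBFS_I(inode)->trunc_rwsem);
>>     /* ... allocate page, add to page cache, check i_size, map it ... */
>>     up_read(&HUGETLBFS_I(inode)->trunc_rwsem);
>>
>>     /* truncate path (hugetlb_vmtruncate): */
>>     down_write(&HUGETLBFS_I(inode)->trunc_rwsem);
>>     /* ... set i_size, unmap, remove pages, adjust reserves ... */
>>     up_write(&HUGETLBFS_I(inode)->trunc_rwsem);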
>
> Hm. Don't we already have a lock for this? I mean i_mmap_lock.
>
Thanks Kirill,
Yes, we could use i_mmap_rwsem for this synchronization. I don't
see anyone else using the mutex in this manner. The hugetlb code only
explicitly takes this mutex in write mode today. I suspect that is not
optimal and could be improved. Certainly, the use within
hugetlb_fault->huge_pte_alloc->huge_pmd_share would need to be changed
if we always wanted to take the mutex in read mode during faults.
I'll work on the changes to use i_mmap_rwsem.
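Something like the following, perhaps (untested, just to sketch
the idea):

	/* fault path (hugetlb_fault/hugetlb_no_page): */
	i_mmap_lock_read(mapping);
	/* ... look up/allocate the page, check i_size, set the pte ... */
	i_mmap_unlock_read(mapping);

	/* hugetlb_vmtruncate already takes it in write mode around the
	 * unmap; the write hold would need to extend over page removal
	 * and reserve cleanup as well:
	 */
	i_mmap_lock_write(mapping);
	hugetlb_vmdelete_list(&mapping->i_mmap, pgoff, 0);
	/* remove_inode_hugepages(), hugetlb_unreserve_pages(), ... */
	i_mmap_unlock_write(mapping);
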
However, right now our DB team informs me that the truncate/fault race
is not the cause of their negative huge page reserve count issue.
So, I am searching for more bugs in this area. I found another case
where an allocation for migration could race with a fault in a
VM_NORESERVE vma.
But, there were no migrations noted on the system, so there must be another
bug. Sigh!
--
Mike Kravetz