Subject: Re: [RFC PATCH v4 4/8] hugetlbfs: catch and handle truncate racing with page faults
From: Miaohe Lin <linmiaohe@huawei.com>
To: Mike Kravetz
CC: Muchun Song, Michal Hocko, Peter Xu, Naoya Horiguchi, David Hildenbrand, Aneesh Kumar K. V, Andrea Arcangeli, Kirill A. Shutemov, Davidlohr Bueso, Prakash Sangappa, James Houghton, Mina Almasry, Pasha Tatashin, Axel Rasmussen, Ray Fucillo, Andrew Morton
Date: Thu, 28 Jul 2022 10:02:40 +0800
References: <20220706202347.95150-1-mike.kravetz@oracle.com> <20220706202347.95150-5-mike.kravetz@oracle.com>
On 2022/7/28 3:00, Mike Kravetz wrote:
> On 07/27/22 17:20, Miaohe Lin wrote:
>> On 2022/7/7 4:23, Mike Kravetz wrote:
>>> Most hugetlb fault handling code checks for faults beyond i_size.
>>> While there are early checks in the code paths, the most difficult
>>> to handle are those discovered after taking the page table lock.
>>> At this point, we have possibly allocated a page and consumed
>>> associated reservations and possibly added the page to the page cache.
>>>
>>> When discovering a fault beyond i_size, be sure to:
>>> - Remove the page from page cache, else it will sit there until the
>>>   file is removed.
>>> - Do not restore any reservation for the page consumed. Otherwise
>>>   there will be an outstanding reservation for an offset beyond the
>>>   end of file.
>>>
>>> The 'truncation' code in remove_inode_hugepages must deal with fault
>>> code potentially removing a page/folio from the cache after the page was
>>> returned by filemap_get_folios and before locking the page. This can be
>>> discovered by a change in folio_mapping() after taking folio lock. In
>>> addition, this code must deal with fault code potentially consuming
>>> and returning reservations. To synchronize this, remove_inode_hugepages
>>> will now take the fault mutex for ALL indices in the hole or truncated
>>> range. In this way, it KNOWS fault code has finished with the page/index
>>> OR fault code will see the updated file size.
>>>
>>> Signed-off-by: Mike Kravetz
>>> ---
>>
>>
>>> @@ -5606,8 +5610,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>>>
>>>  	ptl = huge_pte_lock(h, mm, ptep);
>>>  	size = i_size_read(mapping->host) >> huge_page_shift(h);
>>> -	if (idx >= size)
>>> +	if (idx >= size) {
>>> +		beyond_i_size = true;
>>
>> Thanks for your patch. There is one question:
>>
>> Since races between hugetlb page faults and truncate are guarded by
>> hugetlb_fault_mutex, do we really need to check i_size again after
>> taking the page table lock?
>>
>
> Well, the fault mutex can only guard a single hugetlb page. The fault mutex
> is actually an array/table of mutexes hashed by mapping address and file index.
> So, during truncation we take the mutex for each page as they are
> unmapped and removed. So, the fault mutex only synchronizes operations
> on one specific page. The idea with this patch is to coordinate the fault
> code and truncate code when operating on the same page.
>
> In addition, changing the file size happens early in the truncate process
> before taking any locks/mutexes.

I wonder whether we can live with that to make the code simpler. If the file
size changes after hugetlb_fault checks i_size but before it takes the page
table lock, wouldn't the truncate code remove the hugetlb page from the page
cache for us after hugetlb_fault finishes, even if we do not roll back when
re-checking i_size under the page table lock? In a word: if hugetlb_fault
sees a truncated inode, back out early; if not, let the truncate code do its
work. Then we would not need to complicate the already complicated error
path. Or am I missing something? Thanks.

>