From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [PATCH 2/7] mm/khugepaged: stop swapping in page when
 VM_FAULT_RETRY occurs
From: Miaohe Lin <linmiaohe@huawei.com>
To: Yang Shi
Cc: Andrew Morton, Andrea Arcangeli, Matthew Wilcox, Vlastimil Babka,
 David Howells, NeilBrown, Alistair Popple, David Hildenbrand,
 Suren Baghdasaryan, Peter Xu, Linux MM, Linux Kernel Mailing List
Date: Thu, 16 Jun 2022 14:40:31 +0800
References: <20220611084731.55155-1-linmiaohe@huawei.com>
 <20220611084731.55155-3-linmiaohe@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

On 2022/6/16 1:49, Yang Shi wrote:
> On Sat, Jun 11, 2022 at 1:47 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>
>> When do_swap_page returns VM_FAULT_RETRY, we do not retry here, and thus
>> the swap entry will remain in the page table. This will result in later
>> failure. So stop swapping in pages in this case to save cpu cycles.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/khugepaged.c | 19 ++++++++-----------
>>  1 file changed, 8 insertions(+), 11 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 73570dfffcec..a8adb2d1e9c6 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -1003,19 +1003,16 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
>>                 swapped_in++;
>>                 ret = do_swap_page(&vmf);
>>
>> -               /* do_swap_page returns VM_FAULT_RETRY with released mmap_lock */
>> +               /*
>> +                * do_swap_page returns VM_FAULT_RETRY with released mmap_lock.
>> +                * Note we treat VM_FAULT_RETRY as VM_FAULT_ERROR here because
>> +                * we do not retry here and swap entry will remain in pagetable
>> +                * resulting in later failure.
>
> Yeah, it makes sense.
>
>> +                */
>>                 if (ret & VM_FAULT_RETRY) {
>>                         mmap_read_lock(mm);
>
> A further optimization: you should not need to relock mmap_lock. You
> may consider returning a different value or passing in *locked and
> setting it to false, then checking this value in the caller to skip
> the unlock.

Could we instead just keep mmap_lock unlocked when
__collapse_huge_page_swapin() fails, since the caller always does
mmap_read_unlock() when __collapse_huge_page_swapin() returns false, and
add a comment documenting this behavior? That looks like a simpler way
to me. (Rough sketches of both options follow at the end of this mail.)

>
>> -                       if (hugepage_vma_revalidate(mm, haddr, &vma)) {
>> -                               /* vma is no longer available, don't continue to swapin */
>> -                               trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
>> -                               return false;
>> -                       }
>> -                       /* check if the pmd is still valid */
>> -                       if (mm_find_pmd(mm, haddr) != pmd) {
>> -                               trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
>> -                               return false;
>> -                       }
>> +                       trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
>> +                       return false;
>>                 }
>>                 if (ret & VM_FAULT_ERROR) {
>>                         trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
>
> And I think "swapped_in++" needs to be moved after the error handling.

Do you mean doing "swapped_in++" only after the page is swapped in
successfully? Thanks! (A sketch of that is at the end of this mail too.)

>
>> --
>> 2.23.0
>>
>>
> .
>
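
For reference, Yang's *locked idea might look roughly like the sketch
below. This is only a rough sketch against the code as of this patch,
not compile-tested; the "locked" parameter and the abbreviated caller
are my own guess at the shape, not an actual implementation:

	/* Hypothetical out-parameter: *locked reports whether mmap_lock
	 * is still held when we return false, so the caller knows
	 * whether it still has to unlock. */
	static bool __collapse_huge_page_swapin(struct mm_struct *mm,
						struct vm_area_struct *vma,
						unsigned long haddr, pmd_t *pmd,
						int referenced, bool *locked)
	{
		...
		ret = do_swap_page(&vmf);

		/* do_swap_page returns VM_FAULT_RETRY with released mmap_lock */
		if (ret & VM_FAULT_RETRY) {
			/* Do not relock; just tell the caller the lock is gone. */
			*locked = false;
			trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
			return false;
		}
		...
	}

and in the caller, something like:

	bool locked = true;

	if (unmapped && !__collapse_huge_page_swapin(mm, vma, haddr, pmd,
						     referenced, &locked)) {
		/* Skip the unlock when the swapin path already dropped it. */
		if (locked)
			mmap_read_unlock(mm);
		goto out_nolock;
	}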
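
What I had in mind instead would make "returns false" itself imply
"mmap_lock has been released", documented at the function, so no extra
parameter is needed. Again only a sketch, and note the VM_FAULT_ERROR
path would then have to unlock before returning:

	/*
	 * Returns false on failure. In that case mmap_lock has already
	 * been released, either by do_swap_page() on VM_FAULT_RETRY or
	 * by us, so the caller must not unlock again.
	 */
	if (ret & VM_FAULT_RETRY) {
		trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
		return false;
	}
	if (ret & VM_FAULT_ERROR) {
		mmap_read_unlock(mm);
		trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
		return false;
	}

The tradeoff is an implicit locking contract that every caller has to
remember, whereas the *locked out-parameter makes the lock state
explicit at each call site.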
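
And if I understand the "swapped_in++" comment correctly, moving the
increment below the error checks would make the counter mean "pages
actually swapped in". A sketch of that, with the unrelated lines elided:

	ret = do_swap_page(&vmf);

	if (ret & VM_FAULT_RETRY) {
		...
		return false;
	}
	if (ret & VM_FAULT_ERROR) {
		trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
		return false;
	}
	/* Count only faults that actually swapped a page in. */
	swapped_in++;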