From mboxrd@z Thu Jan 1 00:00:00 1970
From: Peng Zhang
To: , , , , , ,
CC: , , , ZhangPeng
Subject: [PATCH v6 1/6] userfaultfd: convert mfill_atomic_pte_copy() to use a folio
Date: Mon, 10 Apr 2023 21:39:27 +0800
Message-ID: <20230410133932.32288-2-zhangpeng362@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230410133932.32288-1-zhangpeng362@huawei.com>
References: <20230410133932.32288-1-zhangpeng362@huawei.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Sender: owner-linux-mm@kvack.org
Precedence: bulk
List-ID: 

From: ZhangPeng

Call vma_alloc_folio() directly instead of alloc_page_vma() and convert
page_kaddr to kaddr in mfill_atomic_pte_copy(). Removes several calls to
compound_head().
Signed-off-by: ZhangPeng 
Reviewed-by: Sidhartha Kumar 
Reviewed-by: Mike Kravetz 
---
 mm/userfaultfd.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 7f1b5f8b712c..313bc683c2b6 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -135,17 +135,18 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 				 uffd_flags_t flags,
 				 struct page **pagep)
 {
-	void *page_kaddr;
+	void *kaddr;
 	int ret;
-	struct page *page;
+	struct folio *folio;
 
 	if (!*pagep) {
 		ret = -ENOMEM;
-		page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr);
-		if (!page)
+		folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, dst_vma,
+					dst_addr, false);
+		if (!folio)
 			goto out;
 
-		page_kaddr = kmap_local_page(page);
+		kaddr = kmap_local_folio(folio, 0);
 		/*
 		 * The read mmap_lock is held here. Despite the
 		 * mmap_lock being read recursive a deadlock is still
@@ -162,45 +163,44 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 		 * and retry the copy outside the mmap_lock.
 		 */
 		pagefault_disable();
-		ret = copy_from_user(page_kaddr,
-				     (const void __user *) src_addr,
+		ret = copy_from_user(kaddr, (const void __user *) src_addr,
 				     PAGE_SIZE);
 		pagefault_enable();
-		kunmap_local(page_kaddr);
+		kunmap_local(kaddr);
 
 		/* fallback to copy_from_user outside mmap_lock */
 		if (unlikely(ret)) {
 			ret = -ENOENT;
-			*pagep = page;
+			*pagep = &folio->page;
 			/* don't free the page */
 			goto out;
 		}
 
-		flush_dcache_page(page);
+		flush_dcache_folio(folio);
 	} else {
-		page = *pagep;
+		folio = page_folio(*pagep);
 		*pagep = NULL;
 	}
 
 	/*
-	 * The memory barrier inside __SetPageUptodate makes sure that
+	 * The memory barrier inside __folio_mark_uptodate makes sure that
 	 * preceding stores to the page contents become visible before
 	 * the set_pte_at() write.
 	 */
-	__SetPageUptodate(page);
+	__folio_mark_uptodate(folio);
 
 	ret = -ENOMEM;
-	if (mem_cgroup_charge(page_folio(page), dst_vma->vm_mm, GFP_KERNEL))
+	if (mem_cgroup_charge(folio, dst_vma->vm_mm, GFP_KERNEL))
 		goto out_release;
 
 	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
-				       page, true, flags);
+				       &folio->page, true, flags);
 	if (ret)
 		goto out_release;
 out:
 	return ret;
 out_release:
-	put_page(page);
+	folio_put(folio);
 	goto out;
 }
-- 
2.25.1