Subject: Re: [PATCH v5 3/6] userfaultfd: convert copy_huge_page_from_user() to copy_folio_from_user()
From: "zhangpeng (AS)" <zhangpeng362@huawei.com>
To: Vishal Moola
Date: Sat, 8 Apr 2023 12:43:28 +0800
References: <20230331093937.945725-1-zhangpeng362@huawei.com>
 <20230331093937.945725-4-zhangpeng362@huawei.com>

On 2023/4/7 10:28, Vishal Moola wrote:
> On Fri, Mar 31, 2023 at 2:41 AM Peng Zhang wrote:
>> From: ZhangPeng
>>
>> Replace copy_huge_page_from_user() with copy_folio_from_user().
>> copy_folio_from_user() does the same as copy_huge_page_from_user(), but
>> takes in a folio instead of a page. Convert page_kaddr to kaddr in
>> copy_folio_from_user() to do indenting cleanup.
>>
>> Signed-off-by: ZhangPeng
>> Reviewed-by: Sidhartha Kumar
>> ---
>>  include/linux/mm.h |  7 +++----
>>  mm/hugetlb.c       |  5 ++---
>>  mm/memory.c        | 26 ++++++++++++--------------
>>  mm/userfaultfd.c   |  6 ++----
>>  4 files changed, 19 insertions(+), 25 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index e249208f8fbe..cf4d773ca506 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -3682,10 +3682,9 @@ extern void copy_user_huge_page(struct page *dst, struct page *src,
>>                                 unsigned long addr_hint,
>>                                 struct vm_area_struct *vma,
>>                                 unsigned int pages_per_huge_page);
>> -extern long copy_huge_page_from_user(struct page *dst_page,
>> -                                     const void __user *usr_src,
>> -                                     unsigned int pages_per_huge_page,
>> -                                     bool allow_pagefault);
>> +long copy_folio_from_user(struct folio *dst_folio,
>> +                          const void __user *usr_src,
>> +                          bool allow_pagefault);
>>
>>  /**
>>   * vma_is_special_huge - Are transhuge page-table entries considered special?
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 7e4a80769c9e..aade1b513474 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -6217,9 +6217,8 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
>>                 goto out;
>>         }
>>
>> -       ret = copy_huge_page_from_user(&folio->page,
>> -                                      (const void __user *) src_addr,
>> -                                      pages_per_huge_page(h), false);
>> +       ret = copy_folio_from_user(folio, (const void __user *) src_addr,
>> +                                  false);
>>
>>         /* fallback to copy_from_user outside mmap_lock */
>>         if (unlikely(ret)) {
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 808f354bce65..4976422b6979 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -5868,35 +5868,33 @@ void copy_user_huge_page(struct page *dst, struct page *src,
>>         process_huge_page(addr_hint, pages_per_huge_page, copy_subpage, &arg);
>>  }
>>
>> -long copy_huge_page_from_user(struct page *dst_page,
>> -                             const void __user *usr_src,
>> -                             unsigned int pages_per_huge_page,
>> -                             bool allow_pagefault)
>> +long copy_folio_from_user(struct folio *dst_folio,
>> +                         const void __user *usr_src,
>> +                         bool allow_pagefault)
>>  {
>> -       void *page_kaddr;
>> +       void *kaddr;
>>         unsigned long i, rc = 0;
>> -       unsigned long ret_val = pages_per_huge_page * PAGE_SIZE;
>> +       unsigned int nr_pages = folio_nr_pages(dst_folio);
>> +       unsigned long ret_val = nr_pages * PAGE_SIZE;
>>         struct page *subpage;
>>
>> -       for (i = 0; i < pages_per_huge_page; i++) {
>> -               subpage = nth_page(dst_page, i);
>> -               page_kaddr = kmap_local_page(subpage);
>> +       for (i = 0; i < nr_pages; i++) {
>> +               subpage = folio_page(dst_folio, i);
>> +               kaddr = kmap_local_page(subpage);
>>                 if (!allow_pagefault)
>>                         pagefault_disable();
>> -               rc = copy_from_user(page_kaddr,
>> -                               usr_src + i * PAGE_SIZE, PAGE_SIZE);
>> +               rc = copy_from_user(kaddr, usr_src + i * PAGE_SIZE, PAGE_SIZE);
>>                 if (!allow_pagefault)
>>                         pagefault_enable();
>> -               kunmap_local(page_kaddr);
>> +               kunmap_local(kaddr);
>>
>>                 ret_val -= (PAGE_SIZE - rc);
>>                 if (rc)
>>                         break;
>>
>> -               flush_dcache_page(subpage);
>> -
>>                 cond_resched();
>>         }
>> +       flush_dcache_folio(dst_folio);
>>         return ret_val;
>>  }
>
> Moving the flush_dcache_page() outside the loop to be
> flush_dcache_folio() changes the behavior of the function.
>
> Initially, if it fails to copy the entire page, the function breaks out
> of the loop and returns the number of unwritten bytes without
> flushing the page from the cache. Now if it fails, it will still flush
> out the page it failed on, as well as any later pages it may not
> have gotten to yet.

Agreed. If it fails, could we just not flush the folio? Like this:

long copy_folio_from_user(...)
{
	...
	for (i = 0; i < nr_pages; i++) {
		...
		rc = copy_from_user(kaddr, usr_src + i * PAGE_SIZE, PAGE_SIZE);
		...
		ret_val -= (PAGE_SIZE - rc);
		if (rc)
-			break;
+			return ret_val;

		cond_resched();
	}
	flush_dcache_folio(dst_folio);
	return ret_val;
}
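To make that concrete, the whole function with the early return folded in
would read roughly as follows. This is an untested sketch assembled from the
diff above, not a final patch:

long copy_folio_from_user(struct folio *dst_folio,
			  const void __user *usr_src,
			  bool allow_pagefault)
{
	void *kaddr;
	unsigned long i, rc = 0;
	unsigned int nr_pages = folio_nr_pages(dst_folio);
	unsigned long ret_val = nr_pages * PAGE_SIZE;
	struct page *subpage;

	for (i = 0; i < nr_pages; i++) {
		subpage = folio_page(dst_folio, i);
		kaddr = kmap_local_page(subpage);
		if (!allow_pagefault)
			pagefault_disable();
		rc = copy_from_user(kaddr, usr_src + i * PAGE_SIZE, PAGE_SIZE);
		if (!allow_pagefault)
			pagefault_enable();
		kunmap_local(kaddr);

		ret_val -= (PAGE_SIZE - rc);
		/*
		 * Partial copy: return the number of unwritten bytes
		 * without flushing, matching the old behavior of
		 * copy_huge_page_from_user(), which never flushed the
		 * failed page or the pages after it.
		 */
		if (rc)
			return ret_val;

		cond_resched();
	}
	/* Only a fully successful copy reaches the flush. */
	flush_dcache_folio(dst_folio);
	return ret_val;
}

This would keep the single flush_dcache_folio() call on the success path, so
the per-page flush cleanup is preserved while the failure path behaves
exactly as before.

Thanks for your review.

Best Regards,
Peng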