From: Vishal Moola
Date: Thu, 6 Apr 2023 19:28:44 -0700
Subject: Re: [PATCH v5 3/6] userfaultfd: convert copy_huge_page_from_user() to copy_folio_from_user()
To: Peng Zhang
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, akpm@linux-foundation.org, willy@infradead.org, mike.kravetz@oracle.com, sidhartha.kumar@oracle.com, muchun.song@linux.dev, wangkefeng.wang@huawei.com, sunnanyong@huawei.com
In-Reply-To: <20230331093937.945725-4-zhangpeng362@huawei.com>
References: <20230331093937.945725-1-zhangpeng362@huawei.com> <20230331093937.945725-4-zhangpeng362@huawei.com>

On Fri, Mar 31, 2023 at 2:41 AM Peng Zhang wrote:
>
> From: ZhangPeng
>
> Replace copy_huge_page_from_user() with copy_folio_from_user().
> copy_folio_from_user() does the same as copy_huge_page_from_user(), but
> takes in a folio instead of a page. Convert page_kaddr to kaddr in
> copy_folio_from_user() to do indenting cleanup.
>
> Signed-off-by: ZhangPeng
> Reviewed-by: Sidhartha Kumar
> ---
>  include/linux/mm.h |  7 +++----
>  mm/hugetlb.c       |  5 ++---
>  mm/memory.c        | 26 ++++++++++++--------------
>  mm/userfaultfd.c   |  6 ++----
>  4 files changed, 19 insertions(+), 25 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index e249208f8fbe..cf4d773ca506 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3682,10 +3682,9 @@ extern void copy_user_huge_page(struct page *dst, struct page *src,
>                                 unsigned long addr_hint,
>                                 struct vm_area_struct *vma,
>                                 unsigned int pages_per_huge_page);
> -extern long copy_huge_page_from_user(struct page *dst_page,
> -                               const void __user *usr_src,
> -                               unsigned int pages_per_huge_page,
> -                               bool allow_pagefault);
> +long copy_folio_from_user(struct folio *dst_folio,
> +                         const void __user *usr_src,
> +                         bool allow_pagefault);
>
>  /**
>   * vma_is_special_huge - Are transhuge page-table entries considered special?
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 7e4a80769c9e..aade1b513474 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6217,9 +6217,8 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
>                         goto out;
>                 }
>
> -               ret = copy_huge_page_from_user(&folio->page,
> -                                              (const void __user *) src_addr,
> -                                              pages_per_huge_page(h), false);
> +               ret = copy_folio_from_user(folio, (const void __user *) src_addr,
> +                                          false);
>
>                 /* fallback to copy_from_user outside mmap_lock */
>                 if (unlikely(ret)) {
> diff --git a/mm/memory.c b/mm/memory.c
> index 808f354bce65..4976422b6979 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5868,35 +5868,33 @@ void copy_user_huge_page(struct page *dst, struct page *src,
>         process_huge_page(addr_hint, pages_per_huge_page, copy_subpage, &arg);
>  }
>
> -long copy_huge_page_from_user(struct page *dst_page,
> -                             const void __user *usr_src,
> -                             unsigned int pages_per_huge_page,
> -                             bool allow_pagefault)
> +long copy_folio_from_user(struct folio *dst_folio,
> +                         const void __user *usr_src,
> +                         bool allow_pagefault)
>  {
> -       void *page_kaddr;
> +       void *kaddr;
>         unsigned long i, rc = 0;
> -       unsigned long ret_val = pages_per_huge_page * PAGE_SIZE;
> +       unsigned int nr_pages = folio_nr_pages(dst_folio);
> +       unsigned long ret_val = nr_pages * PAGE_SIZE;
>         struct page *subpage;
>
> -       for (i = 0; i < pages_per_huge_page; i++) {
> -               subpage = nth_page(dst_page, i);
> -               page_kaddr = kmap_local_page(subpage);
> +       for (i = 0; i < nr_pages; i++) {
> +               subpage = folio_page(dst_folio, i);
> +               kaddr = kmap_local_page(subpage);
>                 if (!allow_pagefault)
>                         pagefault_disable();
> -               rc = copy_from_user(page_kaddr,
> -                               usr_src + i * PAGE_SIZE, PAGE_SIZE);
> +               rc = copy_from_user(kaddr, usr_src + i * PAGE_SIZE, PAGE_SIZE);
>                 if (!allow_pagefault)
>                         pagefault_enable();
> -               kunmap_local(page_kaddr);
> +               kunmap_local(kaddr);
>
>                 ret_val -= (PAGE_SIZE - rc);
>                 if (rc)
>                         break;
>
> -               flush_dcache_page(subpage);
> -
>                 cond_resched();
>         }
> +       flush_dcache_folio(dst_folio);
>         return ret_val;
> }

Moving the flush_dcache_page() out of the loop and turning it into flush_dcache_folio() changes the behavior of the function. Previously, if the copy of a page failed partway, the function broke out of the loop and returned the number of unwritten bytes without flushing that page from the cache. Now, on failure, it still flushes the page it failed on, as well as any later pages it never got to.