From: Suren Baghdasaryan
Date: Thu, 4 Apr 2024 14:07:45 -0700
Subject: Re: [PATCH] userfaultfd: change src_folio after ensuring it's unpinned in UFFDIO_MOVE
To: Peter Xu
Cc: Matthew Wilcox, Lokesh Gidra, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com, aarcange@redhat.com, david@redhat.com, zhengqi.arch@bytedance.com, kaleshsingh@google.com, ngeoffray@google.com
References: <20240404171726.2302435-1-lokeshgidra@google.com>
On Thu, Apr 4, 2024 at 2:04 PM Peter Xu wrote:
>
> On Thu, Apr 04, 2024 at 01:55:07PM -0700, Suren Baghdasaryan wrote:
> > On Thu, Apr 4, 2024 at 1:32 PM Peter Xu wrote:
> > >
> > > On Thu, Apr 04, 2024 at 06:21:50PM +0100, Matthew Wilcox wrote:
> > > > On Thu, Apr 04, 2024 at 10:17:26AM -0700, Lokesh Gidra wrote:
> > > > > -	folio_move_anon_rmap(src_folio, dst_vma);
> > > > > -	WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > > > -
> > > > > 	src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> > > > > 	/* Folio got pinned from under us. Put it back and fail the move. */
> > > > > 	if (folio_maybe_dma_pinned(src_folio)) {
> > > > > @@ -2270,6 +2267,9 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
> > > > > 		goto unlock_ptls;
> > > > > 	}
> > > > >
> > > > > +	folio_move_anon_rmap(src_folio, dst_vma);
> > > > > +	WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > > > +
> > > >
> > > > This use of WRITE_ONCE scares me.  We hold the folio locked.  Why do
> > > > we need to use WRITE_ONCE?  Who's looking at folio->index without
> > > > holding the folio lock?
> > >
> > > Seems true, but maybe suitable for a separate patch to clean it even so?
> > > We also have the other pte level which has the same WRITE_ONCE(), so if we
> > > want to drop it we may want to drop both.
> >
> > Yes, I'll do that separately and will remove WRITE_ONCE() in both places.
>
> Thanks, Suren. Besides, any comment on below?
>
> It's definitely a generic per-vma question too (besides my willingness to
> remove that userfault-specific code..), so comments welcomed.

Yes, I was typing my reply :) This might have happened simply because
lock_vma_under_rcu() was originally developed to handle only anonymous
page faults and then got expanded to cover file-backed cases as well.
Your suggestion seems fine to me, but I would feel much more comfortable
after Matthew (who added file-backed support) reviewed it.
> > >
> > >
> > > I just got to start reading some of the new move code (Lokesh, apologies on
> > > not being able to provide feedback previously..), but then I found one thing
> > > unclear, on the special handling of private file mappings only in the
> > > userfault context, and I didn't know why:
> > >
> > > lock_vma():
> > >         if (vma) {
> > >                 /*
> > >                  * lock_vma_under_rcu() only checks anon_vma for private
> > >                  * anonymous mappings. But we need to ensure it is assigned in
> > >                  * private file-backed vmas as well.
> > >                  */
> > >                 if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
> > >                         vma_end_read(vma);
> > >                 else
> > >                         return vma;
> > >         }
> > >
> > > AFAIU even for generic users of lock_vma_under_rcu(), anon_vma must be
> > > stable to be used.  It seems weird to me that this became a
> > > userfault-specific operation.
> > >
> > > I was surprised how it worked for private file maps on faults; then I had a
> > > check and it seems we postponed such a check until vmf_anon_prepare(), which
> > > is the CoW path already, so we do as I expected, but it seems unnecessary to
> > > wait until that point?
> > >
> > > Would something like below make it much cleaner for us?  As I just don't
> > > yet see why userfault is special here.
> > >
> > > Thanks,
> > >
> > > ===8<===
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index 984b138f85b4..d5cf1d31c671 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -3213,10 +3213,8 @@ vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
> > >
> > >         if (likely(vma->anon_vma))
> > >                 return 0;
> > > -       if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> > > -               vma_end_read(vma);
> > > -               return VM_FAULT_RETRY;
> > > -       }
> > > +       /* We shouldn't try a per-vma fault at all if anon_vma isn't solid */
> > > +       WARN_ON_ONCE(vmf->flags & FAULT_FLAG_VMA_LOCK);
> > >         if (__anon_vma_prepare(vma))
> > >                 return VM_FAULT_OOM;
> > >         return 0;
> > > @@ -5817,9 +5815,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> > >          * find_mergeable_anon_vma uses adjacent vmas which are not locked.
> > >          * This check must happen after vma_start_read(); otherwise, a
> > >          * concurrent mremap() with MREMAP_DONTUNMAP could dissociate the VMA
> > > -        * from its anon_vma.
> > > +        * from its anon_vma.  This applies to both anon or private file maps.
> > >          */
> > > -       if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
> > > +       if (unlikely(!(vma->vm_flags & VM_SHARED) && !vma->anon_vma))
> > >                 goto inval_end_read;
> > >
> > >         /* Check since vm_start/vm_end might change before we lock the VMA */
> > > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > > index f6267afe65d1..61f21da77dcd 100644
> > > --- a/mm/userfaultfd.c
> > > +++ b/mm/userfaultfd.c
> > > @@ -72,17 +72,8 @@ static struct vm_area_struct *lock_vma(struct mm_struct *mm,
> > >         struct vm_area_struct *vma;
> > >
> > >         vma = lock_vma_under_rcu(mm, address);
> > > -       if (vma) {
> > > -               /*
> > > -                * lock_vma_under_rcu() only checks anon_vma for private
> > > -                * anonymous mappings. But we need to ensure it is assigned in
> > > -                * private file-backed vmas as well.
> > > -                */
> > > -               if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
> > > -                       vma_end_read(vma);
> > > -               else
> > > -                       return vma;
> > > -       }
> > > +       if (vma)
> > > +               return vma;
> > >
> > >         mmap_read_lock(mm);
> > >         vma = find_vma_and_prepare_anon(mm, address);
> > > --
> > > 2.44.0
> > >
> > >
> > > --
> > > Peter Xu
> >
>
> --
> Peter Xu
>