Date: Fri, 2 Sep 2022 11:59:56 -0400
From: Peter Xu <peterx@redhat.com>
To: Yang Shi
Cc: david@redhat.com, kirill.shutemov@linux.intel.com, jhubbard@nvidia.com,
    jgg@nvidia.com, hughd@google.com, akpm@linux-foundation.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: gup: fix the fast GUP race against THP collapse
References: <20220901222707.477402-1-shy828301@gmail.com>

On Thu, Sep 01, 2022 at 04:50:45PM -0700, Yang Shi wrote:
> On Thu, Sep 1, 2022 at 4:26 PM Peter Xu wrote:
> >
> > Hi, Yang,
> >
> > On Thu, Sep 01, 2022 at 03:27:07PM -0700, Yang Shi wrote:
> > > Since general RCU GUP fast was introduced in commit 2667f50e8b81 ("mm:
> > > introduce a general RCU get_user_pages_fast()"), a TLB flush is no longer
> > > sufficient to handle concurrent GUP-fast in all cases, it only handles
> > > traditional IPI-based GUP-fast correctly.
> >
> > If TLB flush (or, IPI broadcasts) used to work to protect against gup-fast,
> > I'm kind of confused why it's not sufficient even if with RCU gup?  Isn't
> > that'll keep working as long as interrupt disabled (which current fast-gup
> > will still do)?
>
> Actually the wording was copied from David's commit log for his
> PageAnonExclusive fix. My understanding is the IPI broadcast still
> works, but it may not be supported by all architectures and not
> preferred anymore. So we should avoid depending on IPI broadcast IIUC.
>
> >
> > IIUC the issue is you suspect not all archs correctly implemented
> > pmdp_collapse_flush(), or am I wrong?
>
> This is a possible fix, please see below for details.
>
> >
> > > On architectures that send
> > > an IPI broadcast on TLB flush, it works as expected.  But on the
> > > architectures that do not use IPI to broadcast TLB flush, it may have
> > > the below race:
> > >
> > >             CPU A                               CPU B
> > >         THP collapse                            fast GUP
> > >                                       gup_pmd_range() <-- see valid pmd
> > >                                           gup_pte_range() <-- work on pte
> > >  pmdp_collapse_flush() <-- clear pmd and flush
> > >  __collapse_huge_page_isolate()
> > >      check page pinned <-- before GUP bump refcount
> > >                                               pin the page
> > >                                               check PTE <-- no change
> > >  __collapse_huge_page_copy()
> > >      copy data to huge page
> > >      ptep_clear()
> > >  install huge pmd for the huge page
> > >                                       return the stale page
> > >  discard the stale page
> > >
> > > The race could be fixed by checking whether PMD is changed or not after
> > > taking the page pin in fast GUP, just like what it does for PTE. If the
> > > PMD is changed it means there may be parallel THP collapse, so GUP
> > > should back off.
> >
> > Could the race also be fixed by impl pmdp_collapse_flush() correctly for
> > the archs that are missing? Do you know which arch(s) is broken with it?
>
> Yes, and this was suggested by me in the first place, but per the
> suggestion from John and David, this is not the preferred way. I think
> it is because:
>
> Firstly, using IPI to serialize against fast GUP is not recommended
> anymore since fast GUP does check PTE then back off so we should avoid
> it.
> Secondly, if checking PMD then backing off could solve the problem,
> why do we still need broadcast IPI? It doesn't sound performant.
>
> >
> > It's just not clear to me whether this patch is an optimization or a fix,
> > if it's a fix whether the IPI broadcast in ppc pmdp_collapse_flush() would
> > still be needed.
>
> It is a fix and the fix will make IPI broadcast not useful anymore.

How about another patch to remove the ppc impl too?  Then it can be a
two-patch series, so that the ppc developers can be copied; maybe it also
helps to have the ppc people look at the current approach.

Then the last piece of it is the s390 pmdp_collapse_flush().  I'm wondering
whether the generic pmdp_collapse_flush() would be good enough, since the
only addition compared with the s390 one will be flush_tlb_range() (which is
a further __tlb_flush_mm_lazy).  David may have some thoughts.
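[ For reference, the generic pmdp_collapse_flush() being compared against is
  roughly the sketch below -- paraphrased from memory of mm/pgtable-generic.c
  rather than copied, so details may differ from the actual tree.  The
  flush_tlb_range() at the end is the extra step relative to the s390 variant
  mentioned above: ]

	pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
				  unsigned long address, pmd_t *pmdp)
	{
		pmd_t pmd;

		VM_BUG_ON(address & ~HPAGE_PMD_MASK);
		VM_BUG_ON(pmd_trans_huge(*pmdp));
		/* Atomically clear the pmd and grab its old value. */
		pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
		/*
		 * Flush the covered range; per the comparison above, this
		 * call is the one addition relative to the s390 version.
		 */
		flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
		return pmd;
	}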
The patch itself looks good to me, one trivial nit below.

>
> >
> > Thanks,
> >
> > >
> > > Also update the stale comment about serializing against fast GUP in
> > > khugepaged.
> > >
> > > Fixes: 2667f50e8b81 ("mm: introduce a general RCU get_user_pages_fast()")
> > > Signed-off-by: Yang Shi
> > > ---
> > >  mm/gup.c        | 30 ++++++++++++++++++++++++------
> > >  mm/khugepaged.c | 10 ++++++----
> > >  2 files changed, 30 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/mm/gup.c b/mm/gup.c
> > > index f3fc1f08d90c..4365b2811269 100644
> > > --- a/mm/gup.c
> > > +++ b/mm/gup.c
> > > @@ -2380,8 +2380,9 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
> > >  }
> > >
> > >  #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
> > > -static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> > > -			 unsigned int flags, struct page **pages, int *nr)
> > > +static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
> > > +			 unsigned long end, unsigned int flags,
> > > +			 struct page **pages, int *nr)
> > >  {
> > >  	struct dev_pagemap *pgmap = NULL;
> > >  	int nr_start = *nr, ret = 0;
> > > @@ -2423,7 +2424,23 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> > >  			goto pte_unmap;
> > >  		}
> > >
> > > -		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
> > > +		/*
> > > +		 * THP collapse conceptually does:
> > > +		 *   1. Clear and flush PMD
> > > +		 *   2. Check the base page refcount
> > > +		 *   3. Copy data to huge page
> > > +		 *   4. Clear PTE
> > > +		 *   5. Discard the base page
> > > +		 *
> > > +		 * So fast GUP may race with THP collapse then pin and
> > > +		 * return an old page since TLB flush is no longer sufficient
> > > +		 * to serialize against fast GUP.
> > > +		 *
> > > +		 * Check PMD, if it is changed just back off since it
> > > +		 * means there may be parallel THP collapse.

Would you mind rewording this comment a bit?  It feels a bit weird to
suddenly mention THP collapse, especially its details.

Maybe add some statement on the history of why we check the pte, and on the
case where the pmd check is needed (which is where the THP collapse example
could be moved to, imho)?

One attempt of mine, for reference:

        /*
         * Fast-gup relies on pte change detection to avoid
         * concurrent pgtable operations.
         *
         * To pin the page, fast-gup needs to do below in order:
         * (1) pin the page (by prefetching pte), then (2) check
         * pte not changed.
         *
         * For the rest of pgtable operations where pgtable updates
         * can be racy with fast-gup, we need to do (1) clear pte,
         * then (2) check whether page is pinned.
         *
         * Above will work for all pte-level operations, including
         * thp split.
         *
         * For thp collapse, it's a bit more complicated because
         * with RCU pgtable free fast-gup can be walking a pgtable
         * page that is being freed (so pte is still valid but pmd
         * can be cleared already).  To avoid race in such
         * condition, we need to also check pmd here to make sure
         * pmd doesn't change (corresponds to pmdp_collapse_flush()
         * in the thp collapse code path).
         */

If you agree with the comment change, feel free to add:

Acked-by: Peter Xu <peterx@redhat.com>

Thanks,

--
Peter Xu
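[ For completeness, since the quoted hunk stops just before the new check
  itself: the "check PMD then back off" discussed in this thread amounts to
  re-validating the pmd alongside the existing pte re-validation once the
  page has been pinned.  A simplified sketch of that shape is below -- an
  illustration, not the exact hunk from the patch; in particular,
  gup_put_folio() is assumed here as the way the just-taken pin is dropped: ]

		/* The page was pinned above based on the pte value read. */
		if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
		    unlikely(pte_val(pte) != pte_val(*ptep))) {
			/*
			 * Either level changed under us, e.g. a parallel
			 * THP collapse ran pmdp_collapse_flush(): drop the
			 * pin and let the slow GUP path handle it.
			 */
			gup_put_folio(folio, 1, flags);
			goto pte_unmap;
		}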