From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Date: Mon, 5 Sep 2022 16:40:05 +0200
Subject: Re: [PATCH] mm: gup: fix the fast GUP race against THP collapse
To: Baolin Wang, John Hubbard, Yang Shi, peterx@redhat.com,
 kirill.shutemov@linux.intel.com, jgg@nvidia.com, hughd@google.com,
 akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Message-ID: <27c814a5-03b1-9745-b7bb-c877adc0b810@redhat.com>
In-Reply-To: <9f098ff0-26d7-477c-13fa-cb878981e1ac@linux.alibaba.com>
References: <20220901222707.477402-1-shy828301@gmail.com>
 <0c9d9774-77dd-fd93-b5b6-fc63f3d01b7f@linux.alibaba.com>
 <383fec21-9801-9b60-7570-856da2133ea9@redhat.com>
 <9f098ff0-26d7-477c-13fa-cb878981e1ac@linux.alibaba.com>

On 05.09.22 16:35, Baolin Wang wrote:
> On 9/5/2022 7:11 PM, David Hildenbrand wrote:
>> On 05.09.22 12:24, David Hildenbrand wrote:
>>> On 05.09.22 12:16, Baolin Wang wrote:
>>>> On 9/5/2022 3:59 PM, David Hildenbrand wrote:
>>>>> On 05.09.22 00:29, John Hubbard wrote:
>>>>>> On 9/1/22 15:27, Yang Shi wrote:
>>>>>>> Since general RCU GUP fast was introduced in commit 2667f50e8b81
>>>>>>> ("mm: introduce a general RCU get_user_pages_fast()"), a TLB
>>>>>>> flush is no longer sufficient to handle concurrent GUP-fast in
>>>>>>> all cases; it only handles traditional IPI-based GUP-fast
>>>>>>> correctly.  On architectures that send an IPI broadcast on TLB
>>>>>>> flush, it works as expected.  But on architectures that do not
>>>>>>> use an IPI to broadcast the TLB flush, it may have the below race:
>>>>>>>
>>>>>>>        CPU A                                          CPU B
>>>>>>>   THP collapse                                     fast GUP
>>>>>>>                                                    gup_pmd_range() <-- see valid pmd
>>>>>>>                                                        gup_pte_range() <-- work on pte
>>>>>>>   pmdp_collapse_flush() <-- clear pmd and flush
>>>>>>>   __collapse_huge_page_isolate()
>>>>>>>       check page pinned <-- before GUP bump refcount
>>>>>>>                                                        pin the page
>>>>>>>                                                        check PTE <-- no change
>>>>>>>   __collapse_huge_page_copy()
>>>>>>>       copy data to huge page
>>>>>>>       ptep_clear()
>>>>>>>   install huge pmd for the huge page
>>>>>>>                                                        return the stale page
>>>>>>>   discard the stale page
>>>>>>
>>>>>> Hi Yang,
>>>>>>
>>>>>> Thanks for taking the trouble to write down these notes. I always
>>>>>> forget which race we are dealing with, and this is a great help. :)
>>>>>>
>>>>>> More...
>>>>>>
>>>>>>> The race could be fixed by checking whether the PMD is changed
>>>>>>> after taking the page pin in fast GUP, just like what is done for
>>>>>>> the PTE.  If the PMD has changed, it means there may be a parallel
>>>>>>> THP collapse, so GUP should back off.
>>>>>>>
>>>>>>> Also update the stale comment about serializing against fast GUP
>>>>>>> in khugepaged.
>>>>>>>
>>>>>>> Fixes: 2667f50e8b81 ("mm: introduce a general RCU get_user_pages_fast()")
>>>>>>> Signed-off-by: Yang Shi
>>>>>>> ---
>>>>>>>  mm/gup.c        | 30 ++++++++++++++++++++++++------
>>>>>>>  mm/khugepaged.c | 10 ++++++----
>>>>>>>  2 files changed, 30 insertions(+), 10 deletions(-)
>>>>>>>
>>>>>>> diff --git a/mm/gup.c b/mm/gup.c
>>>>>>> index f3fc1f08d90c..4365b2811269 100644
>>>>>>> --- a/mm/gup.c
>>>>>>> +++ b/mm/gup.c
>>>>>>> @@ -2380,8 +2380,9 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
>>>>>>>  }
>>>>>>>
>>>>>>>  #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
>>>>>>> -static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>>>>>>> -			 unsigned int flags, struct page **pages, int *nr)
>>>>>>> +static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
>>>>>>> +			 unsigned long end, unsigned int flags,
>>>>>>> +			 struct page **pages, int *nr)
>>>>>>>  {
>>>>>>>  	struct dev_pagemap *pgmap = NULL;
>>>>>>>  	int nr_start = *nr, ret = 0;
>>>>>>> @@ -2423,7 +2424,23 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>>>>>>>  			goto pte_unmap;
>>>>>>>  		}
>>>>>>>
>>>>>>> -		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
>>>>>>> +		/*
>>>>>>> +		 * THP collapse conceptually does:
>>>>>>> +		 *   1. Clear and flush PMD
>>>>>>> +		 *   2. Check the base page refcount
>>>>>>> +		 *   3. Copy data to huge page
>>>>>>> +		 *   4. Clear PTE
>>>>>>> +		 *   5. Discard the base page
>>>>>>> +		 *
>>>>>>> +		 * So fast GUP may race with THP collapse then pin and
>>>>>>> +		 * return an old page since TLB flush is no longer
>>>>>>> +		 * sufficient to serialize against fast GUP.
>>>>>>> +		 *
>>>>>>> +		 * Check PMD, if it is changed just back off since it
>>>>>>> +		 * means there may be parallel THP collapse.
>>>>>>> +		 */
>>>>>>
>>>>>> As I mentioned in the other thread, it would be a nice touch to move
>>>>>> such discussion into the comment header.
>>>>>>
>>>>>>> +		if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
>>>>>>> +		    unlikely(pte_val(pte) != pte_val(*ptep))) {
>>>>>>
>>>>>> That should be READ_ONCE() for the *pmdp and *ptep reads. Because this
>>>>>> whole lockless house of cards may fall apart if we try reading the
>>>>>> page table values without READ_ONCE().
>>>>>
>>>>> I came to the conclusion that the implicit memory barrier when grabbing
>>>>> a reference on the page is sufficient such that we don't need READ_ONCE
>>>>> here.
>>>>
>>>> IMHO the compiler may optimize the code 'pte_val(*ptep)' to always read
>>>> from a register, and then we can get an old value if another thread did
>>>> set_pte(). I am not sure how the implicit memory barrier can prevent
>>>> that compiler optimization. Please correct me if I missed something.
>>>
>>> IIUC, a memory barrier always implies a compiler barrier.
>>
>> To clarify what I mean, Documentation/atomic_t.txt documents:
>>
>> NOTE: when the atomic RmW ops are fully ordered, they should also imply
>> a compiler barrier.
>
> Right, I agree. That means the compiler cannot reorder the
> 'pte_val(*ptep)' read; however, what still confuses me is whether the
> compiler can keep the value of *ptep in a register or on the stack
> instead of reloading it from memory.

After the memory+compiler barrier, the value has to be reloaded.
Documentation/memory-barriers.txt documents under "COMPILER BARRIER":

"READ_ONCE() and WRITE_ONCE() can be thought of as weak forms of
barrier() that affect only the specific accesses flagged by the
READ_ONCE() or WRITE_ONCE()."

Consequently, if there already is a compiler barrier, an additional
READ_ONCE/WRITE_ONCE isn't required.
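To spell out how that applies to the pattern in this patch (a rough
sketch only, not the exact mm/gup.c code after this change; the key
property is that the folio reference grab is a fully ordered atomic
RmW):

	pte_t pte = ptep_get(ptep);	/* initial lockless read */

	/*
	 * try_grab_folio() bumps the folio refcount with a fully
	 * ordered atomic RmW, which per atomic_t.txt implies a memory
	 * barrier *and* a compiler barrier.
	 */
	folio = try_grab_folio(page, 1, flags);
	if (!folio)
		goto pte_unmap;

	/*
	 * Because of that implied compiler barrier, the compiler must
	 * reload *pmdp and *ptep from memory for these checks; it may
	 * not reuse a value cached in a register before the grab.
	 */
	if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
	    unlikely(pte_val(pte) != pte_val(*ptep))) {
		gup_put_folio(folio, 1, flags);	/* back off */
		goto pte_unmap;
	}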
> There is a similar issue in commit d6c1f098f2a7 ("mm/swap_state: fix a
> data race in swapin_nr_pages"):
>
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -509,10 +509,11 @@ static unsigned long swapin_nr_pages(unsigned long offset)
>  		return 1;
>
>  	hits = atomic_xchg(&swapin_readahead_hits, 0);
> -	pages = __swapin_nr_pages(prev_offset, offset, hits, max_pages,
> +	pages = __swapin_nr_pages(READ_ONCE(prev_offset), offset, hits,
> +				  max_pages,
>  				  atomic_read(&last_readahead_pages));
>  	if (!hits)
> -		prev_offset = offset;
> +		WRITE_ONCE(prev_offset, offset);
>  	atomic_set(&last_readahead_pages, pages);
>
>  	return pages;

IIUC, the difference here is that there is no other implicit
memory+compiler barrier in between.
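In code terms (again only a sketch, using the lines from the commit
above): nothing that implies a compiler barrier sits between the racy
prev_offset read and write, so the annotations have to go on the
accesses themselves:

	/* no implied compiler barrier between this racy read ... */
	pages = __swapin_nr_pages(READ_ONCE(prev_offset), offset, hits,
				  max_pages,
				  atomic_read(&last_readahead_pages));
	if (!hits)
		/* ... and this racy write on another CPU. */
		WRITE_ONCE(prev_offset, offset);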
-- 
Thanks,

David / dhildenb