From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 22 Sep 2021 22:18:30 -0400
From: Peter Xu
To: Hugh Dickins, Axel Rasmussen
Cc: Axel Rasmussen, LKML, Linux MM, Andrew Morton, Andrea Arcangeli, Nadav Amit
Subject: Re: [PATCH] mm/khugepaged: Detecting uffd-wp vma more efficiently
Message-ID:
References: <20210922175156.130228-1-peterx@redhat.com> <24224366-293a-879-95db-f69abcb0cb70@google.com> <472a3497-ba70-ac6b-5828-bc5c4c93e9ab@google.com>
MIME-Version: 1.0
In-Reply-To: <472a3497-ba70-ac6b-5828-bc5c4c93e9ab@google.com>
Content-Type: multipart/mixed; boundary="hOe13WIugokcaN/t"
Content-Disposition: inline

--hOe13WIugokcaN/t
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline

On Wed, Sep 22, 2021 at 06:22:45PM -0700, Hugh Dickins wrote:
> No, I think I misunderstood you before: thanks for re-explaining.
> (And Axel's !userfaultfd_minor() check before calling do_fault_around()
> plays an important part in making sure that it does reach shmem_fault().)

Still, thanks for confirming this, Hugh.
That said, Axel, I didn't mean I'm against doing something similar to
uffd-wp; it's just a heads-up that you may not find a reproducer showing
real issues with minor mode.

Even though I think minor mode should be fine with the current code, we
could still choose to stop khugepaged from removing the pmd for
VM_UFFD_MINOR vmas, just like what we'll do with VM_UFFD_WP.  At least
it can still reduce false positives.

So far I have queued the attached patch in my local branch; it's
required for uffd-wp shmem afaict.  If you think minor mode would want
that too, I can post it separately with minor mode added in.

Note that it's slightly different from what I pasted in reply to Yang
Shi - I made it slightly more complicated just to make sure there's no
race.  I mentioned the possible race (I think) in the commit log.

Let me know your preference.

Thanks,

-- 
Peter Xu

--hOe13WIugokcaN/t
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename=patch

>From 989d36914ac144177e17f9aacbf2785bb8f21420 Mon Sep 17 00:00:00 2001
From: Peter Xu
Date: Wed, 22 Sep 2021 16:23:33 -0400
Subject: [PATCH] mm/khugepaged: Don't recycle vma pgtable if uffd-wp
 registered

When we're trying to collapse a 2M huge shmem page, don't retract the
pgtable pmd page if it's registered with uffd-wp, because that pgtable
could have pte markers installed.  Recycling that pgtable means we'd
lose the pte markers, which could cause data loss for an uffd-wp enabled
application on shmem.

Instead of disabling khugepaged on these files, simply skip retracting
these special VMAs; then the page cache can still be merged into a huge
thp, and other mm/vma instances can still map the range of the file with
a huge thp when appropriate.
Note that checking VM_UFFD_WP needs to be done with mmap_sem held for
write, which avoids races like:

         khugepaged                    user thread
         ==========                    ===========
    check VM_UFFD_WP, not set
                                  UFFDIO_REGISTER with uffd-wp on shmem
                                  wr-protect some pages (install markers)
    take mmap_sem write lock
    erase pmd and free pmd page
      --> pte markers are dropped unnoticed!

Signed-off-by: Peter Xu
---
 mm/khugepaged.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 045cc579f724..23e1d03156b3 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1451,6 +1451,10 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
 	if (!hugepage_vma_check(vma, vma->vm_flags | VM_HUGEPAGE))
 		return;
 
+	/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
+	if (userfaultfd_wp(vma))
+		return;
+
 	hpage = find_lock_page(vma->vm_file->f_mapping,
 			       linear_page_index(vma, haddr));
 	if (!hpage)
@@ -1591,7 +1595,15 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 		 * reverse order.  Trylock is a way to avoid deadlock.
 		 */
 		if (mmap_write_trylock(mm)) {
-			if (!khugepaged_test_exit(mm)) {
+			/*
+			 * When a vma is registered with uffd-wp, we can't
+			 * recycle the pmd pgtable because there can be pte
+			 * markers installed.  Skip it only, so the rest mm/vma
+			 * can still have the same file mapped hugely, however
+			 * it'll always be mapped in small pages for uffd-wp
+			 * registered ranges.
+			 */
+			if (!khugepaged_test_exit(mm) && !userfaultfd_wp(vma)) {
 				spinlock_t *ptl = pmd_lock(mm, pmd);
 				/* assume page table is clear */
 				_pmd = pmdp_collapse_flush(vma, addr, pmd);
-- 
2.31.1

--hOe13WIugokcaN/t--