Date: Wed, 23 Sep 2020 11:24:09 -0400
From: Peter Xu
To: Jason Gunthorpe
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Linus Torvalds,
 Michal Hocko, Kirill Shutemov, Jann Horn, Oleg Nesterov, Kirill Tkhai,
 Hugh Dickins, Leon Romanovsky, Jan Kara, John Hubbard, Christoph Hellwig,
 Andrew Morton, Andrea Arcangeli
Subject: Re: [PATCH 5/5] mm/thp: Split huge pmds/puds if they're pinned when fork()
Message-ID: <20200923152409.GC59978@xz-x1>
References: <20200921211744.24758-1-peterx@redhat.com> <20200921212031.25233-1-peterx@redhat.com> <20200922120505.GH8409@ziepe.ca>
In-Reply-To: <20200922120505.GH8409@ziepe.ca>

On Tue, Sep 22, 2020 at 09:05:05AM -0300, Jason Gunthorpe wrote:
> On Mon, Sep 21, 2020 at 05:20:31PM -0400, Peter Xu wrote:
> > Pinned pages shouldn't be write-protected when fork() happens, because a
> > follow-up copy-on-write on those pages could cause the pinned pages to be
> > replaced by random, newly allocated pages.
> >
> > For huge PMDs, we split the huge pmd if pinning is detected, so that
> > future handling will be done at the PTE level (with our latest changes,
> > each of the small pages will be copied).  We can achieve this by letting
> > copy_huge_pmd() return -EAGAIN for pinned pages, so that we fall through
> > in copy_pmd_range() and finally land in the next copy_pte_range() call.
> >
> > Huge PUDs are even more special - so far they do not support anonymous
> > pages.  But they can be handled the same way as huge PMDs, even though
> > splitting a huge PUD means erasing its PUD entries.  That guarantees that
> > the follow-up fault-ins will remap the same pages in either the parent or
> > the child later.
> >
> > This might not be the most efficient way, but it should be easy and clean
> > enough.  It should be fine, since we're tackling a very rare case, just
> > to make sure userspaces that pinned some THPs will still work without
> > MADV_DONTFORK even after they fork().
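
For context, the fall-through described above happens on the caller side, in
copy_pmd_range() (mm/memory.c).  A simplified sketch of that walk, from
memory of the mainline code of this period (locking and the VM_BUG_ON
checks are elided):

	if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd) ||
	    pmd_devmap(*src_pmd)) {
		int err = copy_huge_pmd(dst_mm, src_mm, dst_pmd, src_pmd,
					addr, vma);
		if (err == -ENOMEM)
			return -ENOMEM;
		if (!err)
			continue;  /* huge pmd copied; skip the pte level */
		/* err == -EAGAIN: the pinned pmd was split; fall through */
	}
	if (pmd_none_or_clear_bad(src_pmd))
		continue;
	if (copy_pte_range(dst_mm, src_mm, dst_pmd, src_pmd, vma,
			   addr, next))
		return -ENOMEM;

So when copy_huge_pmd() returns -EAGAIN after the split, the walk simply
drops down to copy_pte_range(), where the per-page pin checks from the
earlier patches in this series apply.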
> >
> > Signed-off-by: Peter Xu
> > ---
> >  mm/huge_memory.c | 26 ++++++++++++++++++++++++++
> >  1 file changed, 26 insertions(+)
> >
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 7ff29cc3d55c..c40aac0ad87e 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -1074,6 +1074,23 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> >
> >  	src_page = pmd_page(pmd);
> >  	VM_BUG_ON_PAGE(!PageHead(src_page), src_page);
> > +
> > +	/*
> > +	 * If this page is a potentially pinned page, split and retry the fault
> > +	 * with smaller page size.  Normally this should not happen because the
> > +	 * userspace should use MADV_DONTFORK upon pinned regions.  This is a
> > +	 * best effort that the pinned pages won't be replaced by another
> > +	 * random page during the coming copy-on-write.
> > +	 */
> > +	if (unlikely(READ_ONCE(src_mm->has_pinned) &&
> > +		     page_maybe_dma_pinned(src_page))) {
> > +		pte_free(dst_mm, pgtable);
> > +		spin_unlock(src_ptl);
> > +		spin_unlock(dst_ptl);
> > +		__split_huge_pmd(vma, src_pmd, addr, false, NULL);
> > +		return -EAGAIN;
> > +	}
>
> Not sure why, but the PMD stuff here is not calling is_cow_mapping()
> before doing the write protect.  Seems like it might be an existing
> bug?

IMHO it's not a bug, because splitting a huge pmd should always be safe.

One thing I can think of that might be special here is when the pmd is
anonymously mapped but also shared (a shared tmpfs THP, I think?).  Then
we'll also mark it as write-protected even if we don't need to (or maybe we
do need it for some reason).  But again, I think it's safe anyway: when a
write fault happens, wp_huge_pmd() will split it into smaller pages
unconditionally.  I just don't know whether that's the ideal way to handle
the shared case.  Andrea should definitely know better (that code has been
there since the first day of THP).

> In any event, the has_pinned logic shouldn't be used without also
> checking is_cow_mapping(), so it should be added to that test.  Same
> remarks for PUD.

I think the case mentioned above is also the special case here, where we
don't check is_cow_mapping().  The major difference is whether we split the
page right now or postpone the split until the next write in each mm.  But
yes, maybe I should keep the is_cow_mapping() check to make it explicit; a
sketch of the combined test is appended below.

Thanks,

--
Peter Xu
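
P.S. A minimal sketch of that combined test, assuming a hypothetical helper
name (is_cow_mapping(), mm->has_pinned and page_maybe_dma_pinned() are the
names already used in this series):

	/*
	 * Hypothetical helper: only a private COW mapping can lose a
	 * pinned page to the wrprotect + copy-on-write sequence, so
	 * non-COW (shared) mappings never need the early split.
	 */
	static inline bool pmd_needs_fork_split(struct vm_area_struct *vma,
						struct mm_struct *src_mm,
						struct page *src_page)
	{
		return is_cow_mapping(vma->vm_flags) &&
		       unlikely(READ_ONCE(src_mm->has_pinned) &&
				page_maybe_dma_pinned(src_page));
	}

copy_huge_pmd() (and the PUD variant) would then call this instead of the
raw has_pinned/page_maybe_dma_pinned pair before deciding to split.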