From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Yang Shi
Cc: Linux Kernel Mailing List, Andrew Morton, Hugh Dickins,
    Linus Torvalds, David Rientjes, Shakeel Butt, John Hubbard,
    Jason Gunthorpe, Mike Kravetz, Mike Rapoport, Kirill A. Shutemov,
    Matthew Wilcox, Vlastimil Babka, Jann Horn, Michal Hocko,
    Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli,
    Peter Xu, Donald Dutile, Christoph Hellwig, Oleg Nesterov,
    Jan Kara, Liang Zhang, Linux MM
Subject: Re: [PATCH RFC v2 5/9] mm/huge_memory: streamline COW logic in do_huge_pmd_wp_page()
Date: Thu, 27 Jan 2022 09:14:07 +0100
Message-ID: <1d1d4b01-961f-d7e7-491c-a482bdd3fded@redhat.com>
References: <20220126095557.32392-1-david@redhat.com>
 <20220126095557.32392-6-david@redhat.com>

On 26.01.22 21:36, Yang Shi wrote:
> On Wed, Jan 26, 2022 at 2:00 AM David Hildenbrand wrote:
>>
>> We currently have a different COW logic for anon THP than we have for
>> ordinary anon pages in do_wp_page(): the effect is that the issue
>> reported in CVE-2020-29374 is currently still possible for anon THP:
>> an unintended information leak from the parent to the child.
>>
>> Let's apply the same logic (page_count() == 1), with similar
>> optimizations to remove additional references first, as we really
>> want to avoid PTE-mapping the THP and copying individual pages as
>> best we can.
>>
>> If we end up with a page that has page_count() != 1, we'll have to
>> PTE-map the THP and fall back to do_wp_page(), which will always copy
>> the page.
>>
>> Note that KSM does not apply to THP.
>>
>> I. Interaction with the swapcache and writeback
>>
>> While a THP is in the swapcache, the swapcache holds one reference on
>> each subpage of the THP. So with PageSwapCache() set, we expect as
>> many additional references as we have subpages. If we manage to
>> remove the THP from the swapcache, all these references will be gone.
>>
>> Usually, a THP is not split when entered into the swapcache and stays
>> a compound page. However, try_to_unmap() will PTE-map the THP and use
>> PTE swap entries. There are no PMD swap entries for that purpose;
>> consequently, we always only swap in subpages into PTEs.
>>
>> Removing a page from the swapcache can fail either when there are
>> remaining swap entries (in which case COW is the right thing to do)
>> or if the page is currently under writeback.
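>>
>> To illustrate the reference math (a hypothetical helper, not part of
>> this patch -- it merely spells out the rule described above):
>>
>> static inline int thp_swapcache_refs(struct page *page)
>> {
>>         /* The swapcache holds one reference per subpage of the THP. */
>>         if (PageSwapCache(page))
>>                 return thp_nr_pages(page);
>>         return 0;
>> }
>>
>> Reuse is then only safe if page_count() does not exceed 1 (our
>> mapping) plus these swapcache references (plus one more if the page
>> may still sit in a pagevec, i.e., !PageLRU()) -- see the check in
>> the diff below.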
>>
>> Having a locked, R/O PMD-mapped THP that is in the swapcache seems
>> to be possible only in corner cases, for example, if try_to_unmap()
>> failed after adding the page to the swapcache. However, it's
>> comparatively easy to handle.
>>
>> As we have to fully unmap a THP before starting writeback, and
>> swapin is always done on the PTE level, we shouldn't find a R/O
>> PMD-mapped THP in the swapcache that is under writeback. This should
>> at least leave writeback out of the picture.
>>
>> II. Interaction with GUP references
>>
>> Having a R/O PMD-mapped THP with GUP references (i.e., R/O
>> references) will result in PTE-mapping the THP on a write fault.
>> Similar to ordinary anon pages, do_wp_page() will have to copy
>> sub-pages and result in a disconnect between the GUP references and
>> the pages actually mapped into the page tables. To improve the
>> situation in the future, we'll need additional handling to mark
>> anonymous pages as definitely exclusive to a single process, only
>> allow GUP pins on exclusive anon pages, and disallow sharing of
>> exclusive anon pages with GUP pins, e.g., during fork().
>>
>> III. Interaction with references from LRU pagevecs
>>
>> Similar to ordinary anon pages, we can have LRU pagevecs referencing
>> our THP. Reliably removing such references requires draining LRU
>> pagevecs on all CPUs -- lru_add_drain_all() -- a possibly expensive
>> operation that can sleep. For now, similar to do_wp_page(), let's
>> conditionally drain the local LRU pagevecs only if we detect
>> !PageLRU().
>>
>> IV. Interaction with speculative/temporary references
>>
>> Similar to ordinary anon pages, other speculative/temporary
>> references on the THP, for example, from the pagecache or page
>> migration code, will disallow exclusive reuse of the page. We'll
>> have to PTE-map the THP.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>  mm/huge_memory.c | 19 +++++++++++++++----
>>  1 file changed, 15 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 406a3c28c026..b6ba88a98266 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -1286,6 +1286,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
>>          struct page *page;
>>          unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>>          pmd_t orig_pmd = vmf->orig_pmd;
>> +        int swapcache_refs = 0;
>>
>>          vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd);
>>          VM_BUG_ON_VMA(!vma->anon_vma, vma);
>> @@ -1303,7 +1304,6 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
>>          page = pmd_page(orig_pmd);
>>          VM_BUG_ON_PAGE(!PageHead(page), page);
>>
>> -        /* Lock page for reuse_swap_page() */
>>          if (!trylock_page(page)) {
>>                  get_page(page);
>>                  spin_unlock(vmf->ptl);
>> @@ -1319,10 +1319,20 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
>>          }
>>
>>          /*
>> -         * We can only reuse the page if nobody else maps the huge page or it's
>> -         * part.
>> +         * See do_wp_page(): we can only map the page writable if there are
>> +         * no additional references.
>>          */
>> -        if (reuse_swap_page(page)) {
>> +        if (PageSwapCache(page))
>> +                swapcache_refs = thp_nr_pages(page);
>> +        if (page_count(page) > 1 + swapcache_refs + !PageLRU(page))
>> +                goto unlock_fallback;
>> +        if (!PageLRU(page))
>> +                lru_add_drain();
>
> IMHO, draining the lru doesn't help out too much for THP, since a THP
> will be drained to the LRU immediately once it is added into a
> pagevec.

Oh, thanks, I think you're right.
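For context, lru_cache_add() does roughly the following (abbreviated
from mm/swap.c around that time, so details may differ slightly):

void lru_cache_add(struct page *page)
{
        struct pagevec *pvec;

        get_page(page);
        local_lock(&lru_pvecs.lock);
        pvec = this_cpu_ptr(&lru_pvecs.lru_add);
        /* Drain to the LRU right away if the helper below says so. */
        if (pagevec_add_and_need_flush(pvec, page))
                __pagevec_lru_add(pvec);
        local_unlock(&lru_pvecs.lock);
}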
The interesting bit is

static bool pagevec_add_and_need_flush(struct pagevec *pvec, struct page *page)
{
        bool ret = false;

        /*
         * Flush if the pagevec is full -- but also immediately for any
         * compound page (i.e., THP), or when LRU caching is disabled.
         */
        if (!pagevec_add(pvec, page) || PageCompound(page) ||
            lru_cache_disabled())
                ret = true;

        return ret;
}

which indeed requests a drain right after a compound page has been
added to the pagevec, so a THP never lingers in a local pagevec.

Will adjust the patch and update the description/comment accordingly,
thanks!

-- 
Thanks,

David / dhildenb