From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5CF7FC19F2B for ; Thu, 4 Aug 2022 08:05:17 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D611B8E0002; Thu, 4 Aug 2022 04:05:16 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D0E548E0001; Thu, 4 Aug 2022 04:05:16 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BD6E48E0002; Thu, 4 Aug 2022 04:05:16 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id AD4728E0001 for ; Thu, 4 Aug 2022 04:05:16 -0400 (EDT) Received: from smtpin26.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 80ABB160317 for ; Thu, 4 Aug 2022 08:05:16 +0000 (UTC) X-FDA: 79761175032.26.FA28A67 Received: from mail-pf1-f174.google.com (mail-pf1-f174.google.com [209.85.210.174]) by imf15.hostedemail.com (Postfix) with ESMTP id 64489A0119 for ; Thu, 4 Aug 2022 08:05:15 +0000 (UTC) Received: by mail-pf1-f174.google.com with SMTP id h28so12334933pfq.11 for ; Thu, 04 Aug 2022 01:05:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20210112.gappssmtp.com; s=20210112; h=in-reply-to:content-disposition:mime-version:references:message-id :subject:cc:to:from:date:from:to:cc; bh=QaFRHMMRbpzASatgAVPUPQZKJhgxQNqYDdRf7vNiVd8=; b=BiWJ1T0IDjvvp6+O2oriA2Yt9F5IEyE++T0y9DAvYGCyD96+ok6r7UkgeXluVbRkW9 giBOF1dpc5xiF5YhwTtonTOWObge9F7odFwOx1fwJToHxc3VXnRSC2M/Ht5jpkD91Gtv mzGEF9h1jjys3+j9Bz3817vsTQOjDQlaO03No+h8f4KOiem10rxrB4UfL+BC37so9mUD YqahrxOfaUf1LKGOW0h2Mvr9STCKQoRyXtJzc8mKuydna3fl7HL2KbAFFS9EabKXjLNR d7DDg/CgEhAKU5vmxSxPSdT7jAcnS1qu9V84BtClc3N990J55ESR59KMCMgSjhnZzveM PikA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=in-reply-to:content-disposition:mime-version:references:message-id :subject:cc:to:from:date:x-gm-message-state:from:to:cc; bh=QaFRHMMRbpzASatgAVPUPQZKJhgxQNqYDdRf7vNiVd8=; b=NVShIlnYT6DR+tmDezXL78cPnFDdn0dTDD5hhIP3oSQBky1pUN2hBJVE0C1K2jn+KW GfD+RoohfAWQbpnQezyFJPGkPSKysD1DurrtWhuY7iKQ2pMVAwJXgyfYGVx+0DAN2bYQ cviRpdwJfctKhfC4uCvfUK4JubIQs4WKQAJhrk0NatZhuApi0Uh1plccRa5D7P57eKya UKVf2PVvPCg//ietxzSMmA3U5KQM4gA30dHiz5Pvi+jiDg+znaY4W+LcSB6vF6qBZ8PD BNfx9QhNJ+JPaa6+LYA/bNjZYmivTmZiar29VNkoJml/H2FUwEr0GPD9HrLfQswyxNm8 llrg== X-Gm-Message-State: ACgBeo12ogUTQusMxGfU34w6WvyuB7AETnLBSJBqGeEVGawGLcM/RI9j 0laFYN2RYgHylEmUJIDaFmw2mw== X-Google-Smtp-Source: AA6agR4GCoJZgN2BQH4YwaKI6psY+f/SedF1SbImutoLsY6zJkOkYofV+mBYShVpaBnxxMU/5vJ/vg== X-Received: by 2002:a05:6a00:24c2:b0:52e:7181:a8a0 with SMTP id d2-20020a056a0024c200b0052e7181a8a0mr531788pfv.57.1659600314085; Thu, 04 Aug 2022 01:05:14 -0700 (PDT) Received: from localhost ([139.177.225.249]) by smtp.gmail.com with ESMTPSA id c1-20020a056a00008100b0052d1341a2c6sm214353pfj.177.2022.08.04.01.05.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 04 Aug 2022 01:05:13 -0700 (PDT) Date: Thu, 4 Aug 2022 16:05:00 +0800 From: Muchun Song To: Mike Kravetz Cc: Joao Martins , linux-mm@kvack.org, Andrew Morton Subject: Re: [PATCH v1] mm/hugetlb_vmemmap: remap head page to newly allocated page Message-ID: References: <20220802180309.19340-1-joao.m.martins@oracle.com> <0b085bb1-b5f7-dfc2-588a-880de0d77ea2@oracle.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1659600316; a=rsa-sha256; cv=none; b=y24yb8FjV61XmokArHs0WodIH57Rd5DWBw/0ehwt5/A11sZ2ckfs/HTTnRq/X8oEiVBmlG BPPxMLlW0A6tMMwvp1LVrVF+5wLC1a/9n3VUliLCNKMm0QCspPHP/l7AjJBHU05/ZPk7yz SRdYOGV2jMpfdy0IKeRSHco8uXI7ap8= ARC-Authentication-Results: i=1; imf15.hostedemail.com; dkim=pass header.d=bytedance-com.20210112.gappssmtp.com header.s=20210112 header.b=BiWJ1T0I; spf=pass (imf15.hostedemail.com: domain of songmuchun@bytedance.com designates 209.85.210.174 as permitted sender) smtp.mailfrom=songmuchun@bytedance.com; dmarc=pass (policy=none) header.from=bytedance.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1659600316; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=QaFRHMMRbpzASatgAVPUPQZKJhgxQNqYDdRf7vNiVd8=; b=5f85lRwvfxEX8SQk+F+WkryMZXGr3w2wBuQlJfNlAOxVX1dmUHWBiszsibI6ufGirqownO eJTevYJ2BYfnxprVyx3jVUE2CU+kl5f1cF/YL/n/HF778xYDg7Zm2gMfhwM4GNAJ2crAxp u/VwapqXud//ecYOYnR1O7gfIeCEF8E= X-Stat-Signature: xzisutj9xujz9o5soyydckiz6f9j5hds X-Rspamd-Queue-Id: 64489A0119 Authentication-Results: imf15.hostedemail.com; dkim=pass header.d=bytedance-com.20210112.gappssmtp.com header.s=20210112 header.b=BiWJ1T0I; spf=pass (imf15.hostedemail.com: domain of songmuchun@bytedance.com designates 209.85.210.174 as permitted sender) smtp.mailfrom=songmuchun@bytedance.com; dmarc=pass (policy=none) header.from=bytedance.com X-Rspam-User: X-Rspamd-Server: rspam05 X-HE-Tag: 1659600315-181796 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Wed, Aug 03, 2022 at 03:42:15PM -0700, Mike Kravetz wrote: > On 08/03/22 18:44, Muchun Song wrote: > > On Wed, Aug 03, 2022 at 10:52:13AM +0100, Joao Martins wrote: > > > On 8/3/22 05:11, Muchun Song wrote: > > > > On Tue, Aug 02, 2022 at 07:03:09PM +0100, Joao Martins wrote: > > > >> Today with `hugetlb_free_vmemmap=on` the struct page memory that is > > > >> freed back to page allocator is as following: for a 2M hugetlb page it > > > >> will reuse the first 4K vmemmap page to remap the remaining 7 vmemmap > > > >> pages, and for a 1G hugetlb it will remap the remaining 4095 vmemmap > > > >> pages. Essentially, that means that it breaks the first 4K of a > > > >> potentially contiguous chunk of memory of 32K (for 2M hugetlb pages) or > > > >> 16M (for 1G hugetlb pages). For this reason the memory that it's free > > > >> back to page allocator cannot be used for hugetlb to allocate huge pages > > > >> of the same size, but rather only of a smaller huge page size: > > > >> > > > > > > > > Hi Joao, > > > > > > > > Thanks for your work on this. I admit you are right. The current mechanism > > > > prevented the freed vmemmap pages from being mergerd into a potential > > > > contiguous page. Allocating a new head page is straightforward approach, > > > > however, it is very dangerous at runtime after system booting up. Why > > > > dangerous? Because you should first 1) copy the content from the head vmemmap > > > > page to the targeted (newly allocated) page, and then 2) change the PTE > > > > entry to the new page. However, the content (especially the refcount) of the > > > > old head vmemmmap page could be changed elsewhere (e.g. other modules) > > > > between the step 1) and 2). Eventually, the new allocated vmemmap page is > > > > corrupted. Unluckily, we don't have an easy approach to prevent it. > > > > > > > OK, I see what I missed. You mean the refcount (or any other data) on the > > > preceeding struct pages to this head struct page unrelated to the hugetlb > > > page being remapped. Meaning when the first struct page in the old vmemmap > > > page is *not* the head page we are trying to remap right? > > > > > > See further below in your patch but I wonder if we could actually check this > > > from the hugetlb head va being aligned to PAGE_SIZE. Meaning that we would be checking > > > that this head page is the first of struct page in the vmemmap page that > > > we are still safe to remap? If so, the patch could be simpler more > > > like mine, without the special freeing path you added below. > > > > > > If I'm right, see at the end. > > > > > > I am not sure we are on the same page (it seems that we are not after I saw your > > below patch). So let me make it become more clarified. > > Thanks Muchun! > I told Joao that you would be the expert in this area and was correct. > Thanks for your trust. > > > > CPU0: CPU1: > > > > vmemmap_remap_free(start, end, reuse) > > // alloc a new page used to be the head vmemmap page > > page = alloc_pages_node(); > > > > memcpy(page_address(page), reuse, PAGE_SIZE); > > // Now the @reuse address is mapped to the original > > // page frame. So the change will be reflected on the > > // original page frame. > > get_page(reuse); > > Here is a thought. > > This code gets called early after allocating a new hugetlb page. This new > compound page has a ref count of 1 on the head page and 0 on all the tail > pages. If the ref count was 0 on the head page, get_page() would not succeed. > > I can not think of a reason why we NEED to have a ref count of 1 on the > head page. It does make it more convenient to simply call put_page() on > the newly initialized hugetlb page and have the page be added to the huegtlb > free lists, but this could easily be modified. > > Matthew Willcox recently pointed me at this: > https://lore.kernel.org/linux-mm/20220531150611.1303156-1-willy@infradead.org/T/#m98fb9f9bd476155b4951339da51a0887b2377476 > > That would only work for hugetlb pages allocated from buddy. For gigantic > pages, we manually 'freeze' (zero ref count) of tail pages and check for > an unexpected increased ref count. We could do the same with the head page. > > Would having zero ref count on the head page eliminate this race? I think most races which try to grab an extra refcount could be avoided in this case. However, I can figure out a race related to memory failure, that is to poison a page in memory_failure(), which set PageHWPoison without checking if the refcount is zero. If we can fix this easily, I think this patch is a good direction. Thanks Mike. > -- > Mike Kravetz > > > vmemmap_remap_pte(); > > // remap to the above new allocated page > > set_pte_at(); > > > > flush_tlb_kernel_range(); > > // Now the @reuse address is mapped to the new allocated > > // page frame. So the change will be reflected on the > > // new page frame and it is corrupted. > > put_page(reuse); > > > > So we should make 1) memcpy, 2) remap and 3) TLB flush atomic on CPU0, however > > it is not easy. >