From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 5 Aug 2022 10:54:23 +0800
From: Muchun Song <songmuchun@bytedance.com>
To: Mike Kravetz
Cc: Joao Martins, linux-mm@kvack.org, Naoya Horiguchi, Andrew Morton
Subject: Re: [PATCH v1] mm/hugetlb_vmemmap: remap head page to newly allocated page
Message-ID: 
References: <20220802180309.19340-1-joao.m.martins@oracle.com>
 <0b085bb1-b5f7-dfc2-588a-880de0d77ea2@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: 
On Thu, Aug 04, 2022 at 09:56:39AM -0700, Mike Kravetz wrote:
> On 08/04/22 16:05, Muchun Song wrote:
> > On Wed, Aug 03, 2022 at 03:42:15PM -0700, Mike Kravetz wrote:
> > > On 08/03/22 18:44, Muchun Song wrote:
> > > > On Wed, Aug 03, 2022 at 10:52:13AM +0100, Joao Martins wrote:
> > > > > On 8/3/22 05:11, Muchun Song wrote:
> > > > > > On Tue, Aug 02, 2022 at 07:03:09PM +0100, Joao Martins wrote:
> > > >
> > > > I am not sure we are on the same page (it seems that we are not after I saw your
> > > > below patch). So let me make it clearer.
> > >
> > > Thanks Muchun!
> > > I told Joao that you would be the expert in this area and was correct.
> >
> > Thanks for your trust.
> >
> > CPU0:                                 CPU1:
> >
> > vmemmap_remap_free(start, end, reuse)
> >   // alloc a new page to be the head vmemmap page
> >   page = alloc_pages_node();
> >
> >   memcpy(page_address(page), reuse, PAGE_SIZE);
> >                                       // Now the @reuse address is mapped to the
> >                                       // original page frame, so the change will be
> >                                       // reflected on the original page frame.
> >                                       get_page(reuse);
> > >
> > > Here is a thought.
> > >
> > > This code gets called early after allocating a new hugetlb page. This new
> > > compound page has a ref count of 1 on the head page and 0 on all the tail
> > > pages. If the ref count was 0 on the head page, get_page() would not succeed.
> > >
> > > I can not think of a reason why we NEED to have a ref count of 1 on the
> > > head page. It does make it more convenient to simply call put_page() on
> > > the newly initialized hugetlb page and have the page be added to the hugetlb
> > > free lists, but this could easily be modified.
> > >
> > > Matthew Wilcox recently pointed me at this:
> > > https://lore.kernel.org/linux-mm/20220531150611.1303156-1-willy@infradead.org/T/#m98fb9f9bd476155b4951339da51a0887b2377476
> > >
> > > That would only work for hugetlb pages allocated from buddy. For gigantic
> > > pages, we manually 'freeze' (zero the ref count of) tail pages and check for
> > > an unexpected increased ref count. We could do the same with the head page.
> > >
> > > Would having zero ref count on the head page eliminate this race?
> >
> > I think most races which try to grab an extra refcount could be avoided
> > in this case.
> >
> > However, I can think of a race related to memory failure: memory_failure()
> > poisons a page by setting PageHWPoison without checking whether the
> > refcount is zero. If we can fix this easily, I think this patch is a good
> > direction.
> 
> Adding Naoya.
> 
> Yes, I recall that discussion in the thread,
> https://lore.kernel.org/linux-mm/3c542543-0965-ef60-4627-1a4116077a5b@huawei.com/
> 
> I don't have any ideas about how to avoid that issue.
> 
> As Naoya notes in that thread, the issue is poisoning a generic compound page
> that may turn into a hugetlb page. There may be a way to work around this.
> 
> However, IMO we may be trying too hard to cover ALL memory error handling races
> with hugetlb. We know that memory errors can happen at ANY time, and a page
> could be marked poisoned at any time. I suspect there are many paths where bad
> things could happen if a memory error happens at 'just the right time'. For
> example, I believe a page could be poisoned at the very end of the page
> allocation path and returned to the caller requesting the page. Certainly,
> not every caller of page allocation checks for and handles a poisoned page.
> In general, we know that memory error handling will not be 100% perfect
> and can not be handled in ALL code paths. In some cases, if a memory error
> happens at 'just the right time', bad things will happen. It would be almost
> impossible to handle a memory error at any time in all code paths. Someone
> please correct me if this is not an accepted/known situation.
> 
> It seems there has been much effort lately to try and catch all hugetlb races
> with memory error handling. That is OK, but I think we need to accept that
> there may be races with code paths that are not worth the effort to try and
> cover. Just my opinion of course.
> 
> For this specific proposal by Joao, I think we can handle most of the races
> if all sub-pages of the hugetlb page (including the head) have a ref count of
> zero. I actually like moving in this direction as it means we could remove
> some existing hugetlb code checking for increased ref counts.

Totally agree.

> I can start working on code to move in this direction. We will need to
> wait for the alloc_frozen_pages() interface and will need to make changes
> to other allocators for gigantic pages. Until then, we can manually zero the
> ref counts before calling the vmemmap freeing routines. With this in place,
> I would say that we do something like Joao proposes to free more contiguous
> vmemmap.
> 
> Thoughts?

I think it is the right direction. I can provide code review once you
finish this work. Thanks Mike.

> In any case, I am going to move forward with setting the ref count to zero
> on all 'hugetlb pages' before they officially become hugetlb pages. As
> mentioned, this should allow for removal of some code checking for inflated
> ref counts.
> -- 
> Mike Kravetz
> 