From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 28 Jun 2022 18:37:18 +0800
From: Muchun Song <songmuchun@bytedance.com>
To: HORIGUCHI NAOYA(堀口 直也)
Cc: Naoya Horiguchi, linux-mm@kvack.org, Andrew Morton,
	David Hildenbrand, Mike Kravetz, Miaohe Lin, Liu Shixin, Yang Shi,
	Oscar Salvador, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 4/9] mm, hwpoison, hugetlb: support saving mechanism of raw error pages
Message-ID:
References: <20220623235153.2623702-1-naoya.horiguchi@linux.dev>
 <20220623235153.2623702-5-naoya.horiguchi@linux.dev>
 <20220628024121.GF2159330@hori.linux.bs1.fc.nec.co.jp>
 <20220628081754.GA2206088@hori.linux.bs1.fc.nec.co.jp>
In-Reply-To: <20220628081754.GA2206088@hori.linux.bs1.fc.nec.co.jp>

On Tue, Jun 28, 2022 at 08:17:55AM +0000, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Tue, Jun 28, 2022 at 02:26:47PM +0800, Muchun Song wrote:
> > On Tue, Jun 28, 2022 at 02:41:22AM +0000, HORIGUCHI NAOYA(堀口 直也) wrote:
> > > On Mon, Jun 27, 2022 at 05:26:01PM +0800, Muchun Song wrote:
> > > > On Fri, Jun 24, 2022 at 08:51:48AM +0900, Naoya Horiguchi wrote:
> > > > > From: Naoya Horiguchi
...
> > > > > +	} else {
> > > > > +		/*
> > > > > +		 * Failed to save raw error info. We no longer trace all
> > > > > +		 * hwpoisoned subpages, and we need to refuse to free/dissolve
> > > > > +		 * this hwpoisoned hugepage.
> > > > > +		 */
> > > > > +		set_raw_hwp_unreliable(hpage);
> > > > > +		return ret;
> > > > > +	}
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +inline int hugetlb_clear_page_hwpoison(struct page *hpage)
> > > > > +{
> > > > > +	struct llist_head *head;
> > > > > +	struct llist_node *t, *tnode;
> > > > > +
> > > > > +	if (raw_hwp_unreliable(hpage))
> > > > > +		return -EBUSY;
> > > >
> > > > IIUC, we use the head page's PageHWPoison to synchronize
> > > > hugetlb_clear_page_hwpoison() and hugetlb_set_page_hwpoison(),
> > > > right? If so, who can set hwp_unreliable here?
> > >
> > > Sorry if I might miss your point, but raw_hwp_unreliable is set when
> > > allocating a raw_hwp_page fails. hugetlb_set_page_hwpoison() can be called
> >
> > Sorry, I missed this. Thanks for your clarification.
> >
> > > multiple times on a hugepage, and if one of the calls fails, the
> > > hwpoisoned hugepage becomes unreliable.
> > >
> > > BTW, as you pointed out above, if we switch to passing GFP_ATOMIC to
> > > kmalloc(), the kmalloc() never fails, so we no longer have to implement
> > > this unreliable
> >
> > No. kmalloc() with GFP_ATOMIC can fail unless I miss something important.
>
> OK, I've interpreted the comment about GFP_ATOMIC wrongly.
>
>  * %GFP_ATOMIC users can not sleep and need the allocation to succeed. A lower
>  * watermark is applied to allow access to "atomic reserves".
>
> > > flag, so things get simpler.
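
To be concrete: GFP_ATOMIC only lets the allocation dip into the atomic
reserves, it does not guarantee success, so the failure path (and the
unreliable flag) has to stay. A minimal sketch of the set side as I read
this patch (struct layout taken from patch 4/9, untested, error-path
details may differ):

	struct raw_hwp_page {
		struct llist_node node;
		struct page *page;
	};

	/* in hugetlb_set_page_hwpoison() */
	struct raw_hwp_page *raw_hwp;

	raw_hwp = kmalloc(sizeof(*raw_hwp), GFP_ATOMIC);
	if (!raw_hwp) {
		/* GFP_ATOMIC can still fail under memory pressure. */
		set_raw_hwp_unreliable(hpage);
		return ret;
	}
	raw_hwp->page = page;
	llist_add(&raw_hwp->node, raw_hwp_list_head(hpage));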
> > > > > +	ClearPageHWPoison(hpage);
> > > > > +	head = raw_hwp_list_head(hpage);
> > > > > +	llist_for_each_safe(tnode, t, head->first) {
> > > >
> > > > Is it possible that a new item is added by hugetlb_set_page_hwpoison()
> > > > and we do not traverse it (we have cleared the page's PageHWPoison)?
> > > > Then we would ignore a real hwpoisoned page, right?
> > >
> > > Maybe you are mentioning the race like below. Yes, that's possible.
> > >
> >
> > Sorry, ignore my previous comments, I was thinking something wrong.
> >
> > >   CPU 0                          CPU 1
> > >
> > >   free_huge_page
> > >     lock hugetlb_lock
> > >     ClearHPageMigratable
> >
> >       remove_hugetlb_page()
> >       // the page is non-HugeTLB now
>
> Oh, I missed that.
>
> > >     unlock hugetlb_lock
> > >                                  get_huge_page_for_hwpoison
> > >                                    lock hugetlb_lock
> > >                                    __get_huge_page_for_hwpoison
> >
> >   // cannot reach here since it is not a HugeTLB page now.
> >   // So this race is impossible. Then we fall back to normal
> >   // page handling. Seems there is a new issue here.
> >   //
> >   // memory_failure()
> >   //   try_memory_failure_hugetlb()
> >   //   if (hugetlb)
> >   //     goto unlock_mutex;
> >   //   if (TestSetPageHWPoison(p)) {
> >   //     // This non-HugeTLB page's vmemmap is still optimized.
> >
> > Setting COMPOUND_PAGE_DTOR after hugetlb_vmemmap_restore() might fix this
> > issue, but we will still encounter the race you mentioned below.
>
> I don't have clear ideas about this now (I haven't tested the
> vmemmap-optimized case yet), so I will think more about this case. Maybe
> memory_failure() needs to detect it, because memory_failure() heavily
> depends on the status of struct page.
>

Because HVO (HugeTLB Vmemmap Optimization) maps all tail vmemmap pages
read-only, we cannot write any data to some tail struct pages. It is a
new issue unrelated to this patch.
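
For illustration only (a hypothetical snippet, not code from this
series), the write that goes wrong is roughly:

	struct page *p;		/* the raw error subpage */

	/*
	 * If the compound page was vmemmap-optimized, all but the
	 * first vmemmap page backing its struct pages are remapped
	 * read-only, so for most tail subpages the write below lands
	 * in read-only vmemmap and faults instead of poisoning the
	 * page, unless hugetlb_vmemmap_restore() has run first.
	 */
	if (TestSetPageHWPoison(p))
		goto unlock_mutex;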
Thanks.

> Thanks,
> Naoya Horiguchi
>
> > Thanks.
> >
> > >                                      hugetlb_set_page_hwpoison
> > >                                        allocate raw_hwp_page
> > >                                        TestSetPageHWPoison
> > >     update_and_free_page
> > >       __update_and_free_page
> > >         if (PageHWPoison)
> > >           hugetlb_clear_page_hwpoison
> > >             TestClearPageHWPoison
> > >             // remove all list items
> > >                                        llist_add
> > >                                    unlock hugetlb_lock
> > >
> > >
> > > The end result seems not critical (leaking raced raw_hwp_page?), but
> > > we need a fix.
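
Right, the end result of that window is a leaked node. Annotated with
the helpers from this patch (a sketch, not tested):

	/* CPU 1, in hugetlb_set_page_hwpoison() */
	raw_hwp = kmalloc(sizeof(*raw_hwp), GFP_ATOMIC);
	TestSetPageHWPoison(hpage);

	/*
	 * CPU 0 runs hugetlb_clear_page_hwpoison() right here: it
	 * clears PageHWPoison and frees every node already on the
	 * list, but the node below is not on it yet.
	 */

	llist_add(&raw_hwp->node, raw_hwp_list_head(hpage));
	/* Nobody will traverse or free this node again => leaked. */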