References: <20210106165632.GT13207@dhcp22.suse.cz>
 <20210107084146.GD13207@dhcp22.suse.cz>
 <20210107111827.GG13207@dhcp22.suse.cz>
 <20210107123854.GJ13207@dhcp22.suse.cz>
 <20210107141130.GL13207@dhcp22.suse.cz>
 <20210108084330.GW13207@dhcp22.suse.cz>
In-Reply-To: <20210108084330.GW13207@dhcp22.suse.cz>
From: Muchun Song
Date: Fri, 8 Jan 2021 17:01:03 +0800
Subject: Re: [External] Re: [PATCH v2 3/6] mm: hugetlb: fix a race between freeing and dissolving the page
To: Michal Hocko
Cc: Mike Kravetz, Andrew Morton, Naoya Horiguchi, Andi Kleen, Linux Memory Management List, LKML

On Fri, Jan 8, 2021 at 4:43 PM Michal Hocko wrote:
>
> On Thu 07-01-21 23:11:22, Muchun Song wrote:
> > On Thu, Jan 7, 2021 at 10:11 PM Michal Hocko wrote:
> > >
> > > On Thu 07-01-21 20:59:33, Muchun Song wrote:
> > > > On Thu, Jan 7, 2021 at 8:38 PM Michal Hocko wrote:
> > > [...]
> > > > > Right. Can we simply back off in the dissolving path when the ref
> > > > > count is 0 && PageHuge() if list_empty(page->lru)? Is there any
> > > > > other scenario when all of the above is true and the page is not
> > > > > being freed?
> > > >
> > > > The list_empty(&page->lru) check may always return false.
> > > > Before freeing, the page is on the active list
> > > > (hstate->hugepage_activelist). After freeing, it is on the free
> > > > list. So list_empty(&page->lru) is always false.
> > >
> > > The point I was trying to make is that the page has to be enqueued when
> > > it is dissolved and freed. If the page is not enqueued then something is
> > > racing. But then I realized that this is not a great check to
> > > detect the race, because pages are going to be released to the buddy
> > > allocator and that will reuse page->lru again. So scratch that, and
> > > sorry for the detour.
> > >
> > > But that made me think some more, and one way to reliably detect the
> > > race should be a PageHuge() check in the freeing path.
> > > This is what the
> > > dissolve path does already. PageHuge becomes false during
> > > update_and_free_page() while holding the hugetlb_lock. So can we use
> > > that?
> >
> > It may make things complex. Apart from freeing the page to the
> > buddy allocator, free_huge_page() also does other work for
> > us. If we detect the race in the freeing path and the page is not a
> > HugeTLB page, the freeing path just returns. We would also have to
> > move that work to the dissolve path. Right?
>
> Not sure what you mean. Dissolving is a subset of the freeing path. It
> effectively only implements the update_and_free_page branch (aka free
> to buddy). It skips some of the existing steps because it believes it
> sees a freed page. But as you have pointed out this is racy, and I
> strongly suspect it is simply wrong in those assumptions. E.g. hugetlb
> cgroup accounting can go wrong, right?

OK, I see what you mean. update_and_free_page() should do the
freeing, similar to __free_huge_page().

> The more I think about it, the more I think that the dissolving path
> should simply share a common helper with __free_huge_page() that
> releases the huge page to the allocator. The only thing that should be
> specific to the dissolving path is HWpoison handling. It shouldn't be
> playing with accounting and whatnot. Btw, the HWpoison handling is
> suspicious as well: a lost race would mean it doesn't happen. But maybe
> there is some fixup handled later on...
>
> > But I have found a tricky problem to solve. See free_huge_page():
> > if we are in a non-task context, we schedule a work item to free
> > the page, and we reuse page->mapping for the list node. If the page
> > has already been freed by the dissolve path, we must not touch
> > page->mapping, so we need to check PageHuge().
> > The check and the llist_add() should be protected by
> > hugetlb_lock, but we cannot do that there. Right? And if the
> > dissolve happens after the page is linked to the list, we also
> > have to remove it from the list (hpage_freelist).
> > It seems to make
> > things more complex.
>
> I am not sure I follow you here, but yes, PageHuge() under hugetlb_lock
> should be the reliable way to check for the race. I am not sure why we
> really need to care about the mapping or other state.

CPU0:                                  CPU1:

free_huge_page(page)
  if (PageHuge(page))
                                       dissolve_free_huge_page(page)
                                         spin_lock(&hugetlb_lock)
                                         update_and_free_page(page)
                                         spin_unlock(&hugetlb_lock)
  llist_add(page->mapping)
  // the mapping is corrupted

The PageHuge(page) check and the llist_add() should be protected by
hugetlb_lock. Right? If so, we cannot hold hugetlb_lock in the
free_huge_page() path.

> --
> Michal Hocko
> SUSE Labs