Date: Tue, 23 Mar 2021 08:57:46 +0100
From: Michal Hocko
To: Mike Kravetz
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Shakeel Butt,
 Oscar Salvador, David Hildenbrand, Muchun Song, David Rientjes,
 Miaohe Lin, Peter Zijlstra, Matthew Wilcox, HORIGUCHI NAOYA,
 "Aneesh Kumar K . V", Waiman Long, Peter Xu, Mina Almasry,
 Andrew Morton
Subject: Re: [RFC PATCH 5/8] hugetlb: change free_pool_huge_page to remove_pool_huge_page
References: <20210319224209.150047-1-mike.kravetz@oracle.com>
 <20210319224209.150047-6-mike.kravetz@oracle.com>

On Mon 22-03-21 16:28:07, Mike Kravetz wrote:
> On 3/22/21 7:31 AM, Michal Hocko wrote:
> > On Fri 19-03-21 15:42:06, Mike Kravetz wrote:
> > [...]
> >> @@ -2090,9 +2084,15 @@ static void return_unused_surplus_pages(struct hstate *h,
> >>  	while (nr_pages--) {
> >>  		h->resv_huge_pages--;
> >>  		unused_resv_pages--;
> >> -		if (!free_pool_huge_page(h, &node_states[N_MEMORY], 1))
> >> +		page = remove_pool_huge_page(h, &node_states[N_MEMORY], 1);
> >> +		if (!page)
> >>  			goto out;
> >> -		cond_resched_lock(&hugetlb_lock);
> >> +
> >> +		/* Drop lock and free page to buddy as it could sleep */
> >> +		spin_unlock(&hugetlb_lock);
> >> +		update_and_free_page(h, page);
> >> +		cond_resched();
> >> +		spin_lock(&hugetlb_lock);
> >>  	}
> >> 
> >> out:
> > 
> > This is likely a matter of taste but the repeated pattern of unlock,
> > update_and_free_page, cond_resched and lock seems rather clumsy.
> > Would it be slightly better/nicer to remove_pool_huge_page into a
> > list_head under a single lock invocation and then free up the whole
> > lot after the lock is dropped?
> 
> Yes, we can certainly do that.
> One downside I see is that the list can contain a bunch of pages not
> accounted for in hugetlb and not free in buddy (or cma).  Ideally, we
> would want to keep those in sync if possible.  Also, the commit that
> added the cond_resched talked about freeing up 12 TB worth of huge
> pages while holding the lock for 150 seconds.  The new code is not
> holding the lock while calling free to buddy, but I wonder how long it
> would take to remove 12 TB worth of huge pages and add them to a
> separate list?

Well, remove_pool_huge_page is just the accounting part and that should
be pretty invisible even when the number of pages is large. The lockless
nature (from the hugetlb POV) of the final page release is the heavy
weight operation, and whether you do it in chunks or in a single go
(with cond_resched) should not be visible either way. We already do the
same thing when uncharging memcg pages (mem_cgroup_uncharge_list).
So I would agree with you that this would be a much bigger problem if
both the hugetlb accounting and the freeing path were equally heavy
weight, so that the delay between the first pages being unaccounted and
actually freed became noticeable.

But I do not want to push for this. I just hated the hugetlb_lock
dances, as this is an ugly and repetitive pattern.
-- 
Michal Hocko
SUSE Labs