Date: Fri, 18 Jun 2021 02:53:03 +0000
From: Dennis Zhou
To: Roman Gushchin
Cc: Tejun Heo, Christoph Lameter, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] percpu: optimize locking in pcpu_balance_workfn()
References: <20210617190322.3636731-1-guro@fb.com>
In-Reply-To: <20210617190322.3636731-1-guro@fb.com>

Hello,

On Thu, Jun 17, 2021 at 12:03:22PM -0700, Roman Gushchin wrote:
> pcpu_balance_workfn() unconditionally calls pcpu_balance_free(),
> pcpu_reclaim_populated(), pcpu_balance_populated() and
> pcpu_balance_free() again.
> 
> Each call to pcpu_balance_free() and pcpu_reclaim_populated() will
> cause at least one acquisition of the pcpu_lock. So even if the
> balancing was scheduled because of a failed atomic allocation,
> pcpu_lock will be acquired at least 4 times. This obviously
> increases the contention on the pcpu_lock.
> 
> To optimize the scheme let's grab the pcpu_lock on the upper level
> (in pcpu_balance_workfn()) and keep it generally locked for the whole
> duration of the scheduled work, but release conditionally to perform
> any slow operations like chunk (de)population and creation of new
> chunks.
> 
> Signed-off-by: Roman Gushchin
> ---
>  mm/percpu.c | 41 +++++++++++++++++++++++++++++------------
>  1 file changed, 29 insertions(+), 12 deletions(-)
> 
> diff --git a/mm/percpu.c b/mm/percpu.c
> index e7b9ca82e9aa..deee7e5bb255 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -1980,6 +1980,9 @@ void __percpu *__alloc_reserved_percpu(size_t size, size_t align)
>   * If empty_only is %false, reclaim all fully free chunks regardless of the
>   * number of populated pages. Otherwise, only reclaim chunks that have no
>   * populated pages.
> + *
> + * CONTEXT:
> + * pcpu_lock (can be dropped temporarily)
>   */
>  static void pcpu_balance_free(bool empty_only)
>  {
> @@ -1987,12 +1990,12 @@ static void pcpu_balance_free(bool empty_only)
>  	struct list_head *free_head = &pcpu_chunk_lists[pcpu_free_slot];
>  	struct pcpu_chunk *chunk, *next;
> 
> +	lockdep_assert_held(&pcpu_lock);
> +
>  	/*
>  	 * There's no reason to keep around multiple unused chunks and VM
>  	 * areas can be scarce. Destroy all free chunks except for one.
>  	 */
> -	spin_lock_irq(&pcpu_lock);
> -
>  	list_for_each_entry_safe(chunk, next, free_head, list) {
>  		WARN_ON(chunk->immutable);
> 
> @@ -2004,8 +2007,10 @@ static void pcpu_balance_free(bool empty_only)
>  			list_move(&chunk->list, &to_free);
>  	}
> 
> -	spin_unlock_irq(&pcpu_lock);
> +	if (list_empty(&to_free))
> +		return;
> 
> +	spin_unlock_irq(&pcpu_lock);
>  	list_for_each_entry_safe(chunk, next, &to_free, list) {
>  		unsigned int rs, re;
> 
> @@ -2019,6 +2024,7 @@ static void pcpu_balance_free(bool empty_only)
>  		pcpu_destroy_chunk(chunk);
>  		cond_resched();
>  	}
> +	spin_lock_irq(&pcpu_lock);
>  }
> 
>  /**
> @@ -2029,6 +2035,9 @@ static void pcpu_balance_free(bool empty_only)
>   * OOM killer to be triggered. We should avoid doing so until an actual
>   * allocation causes the failure as it is possible that requests can be
>   * serviced from already backed regions.
> + *
> + * CONTEXT:
> + * pcpu_lock (can be dropped temporarily)
>   */
>  static void pcpu_balance_populated(void)
>  {
> @@ -2037,6 +2046,8 @@ static void pcpu_balance_populated(void)
>  	struct pcpu_chunk *chunk;
>  	int slot, nr_to_pop, ret;
> 
> +	lockdep_assert_held(&pcpu_lock);
> +
>  	/*
>  	 * Ensure there are certain number of free populated pages for
>  	 * atomic allocs. Fill up from the most packed so that atomic
> @@ -2064,13 +2075,11 @@ static void pcpu_balance_populated(void)
>  		if (!nr_to_pop)
>  			break;
> 
> -		spin_lock_irq(&pcpu_lock);
>  		list_for_each_entry(chunk, &pcpu_chunk_lists[slot], list) {
>  			nr_unpop = chunk->nr_pages - chunk->nr_populated;
>  			if (nr_unpop)
>  				break;
>  		}
> -		spin_unlock_irq(&pcpu_lock);
> 
>  		if (!nr_unpop)
>  			continue;
> @@ -2080,12 +2089,13 @@ static void pcpu_balance_populated(void)
>  					     chunk->nr_pages) {
>  			int nr = min_t(int, re - rs, nr_to_pop);
> 
> +			spin_unlock_irq(&pcpu_lock);
>  			ret = pcpu_populate_chunk(chunk, rs, rs + nr, gfp);
> +			cond_resched();
> +			spin_lock_irq(&pcpu_lock);
>  			if (!ret) {
>  				nr_to_pop -= nr;
> -				spin_lock_irq(&pcpu_lock);
>  				pcpu_chunk_populated(chunk, rs, rs + nr);
> -				spin_unlock_irq(&pcpu_lock);
>  			} else {
>  				nr_to_pop = 0;
>  			}
> @@ -2097,11 +2107,12 @@ static void pcpu_balance_populated(void)
> 
>  	if (nr_to_pop) {
>  		/* ran out of chunks to populate, create a new one and retry */
> +		spin_unlock_irq(&pcpu_lock);
>  		chunk = pcpu_create_chunk(gfp);
> +		cond_resched();
> +		spin_lock_irq(&pcpu_lock);
>  		if (chunk) {
> -			spin_lock_irq(&pcpu_lock);
>  			pcpu_chunk_relocate(chunk, -1);
> -			spin_unlock_irq(&pcpu_lock);
>  			goto retry_pop;
>  		}
>  	}
> @@ -2117,6 +2128,10 @@ static void pcpu_balance_populated(void)
>   * populated pages threshold, reintegrate the chunk if it has empty free pages.
>   * Each chunk is scanned in the reverse order to keep populated pages close to
>   * the beginning of the chunk.
> + *
> + * CONTEXT:
> + * pcpu_lock (can be dropped temporarily)
> + *
>   */
>  static void pcpu_reclaim_populated(void)
>  {
> @@ -2124,7 +2139,7 @@ static void pcpu_reclaim_populated(void)
>  	struct pcpu_block_md *block;
>  	int i, end;
> 
> -	spin_lock_irq(&pcpu_lock);
> +	lockdep_assert_held(&pcpu_lock);
> 
>  restart:
>  	/*
> @@ -2190,8 +2205,6 @@ static void pcpu_reclaim_populated(void)
>  			list_move(&chunk->list,
>  				  &pcpu_chunk_lists[pcpu_sidelined_slot]);
>  	}
> -
> -	spin_unlock_irq(&pcpu_lock);
>  }
> 
>  /**
> @@ -2212,10 +2225,14 @@ static void pcpu_balance_workfn(struct work_struct *work)
>  	 * appropriate.
>  	 */
>  	mutex_lock(&pcpu_alloc_mutex);
> +	spin_lock_irq(&pcpu_lock);
> +
>  	pcpu_balance_free(false);
>  	pcpu_reclaim_populated();
>  	pcpu_balance_populated();
>  	pcpu_balance_free(true);
> +
> +	spin_unlock_irq(&pcpu_lock);
>  	mutex_unlock(&pcpu_alloc_mutex);
>  }
> 
> -- 
> 2.31.1
> 

I've applied this to for-5.14.

Thanks,
Dennis
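
[Editor's note] For readers skimming the thread, the following is a minimal userspace sketch of the locking pattern the patch moves to: take the lock once for the whole balance pass and drop it only around slow, blocking work, re-reading shared state after re-acquiring it. It uses a pthread mutex and made-up names (balance_workfn, populate_one_page, a plain nr_to_pop counter); it is an illustration of the pattern, not the mm/percpu.c implementation.

/*
 * Userspace sketch of "hold the lock for the pass, drop it around slow work".
 * "lock" stands in for pcpu_lock; populate_one_page() stands in for a
 * sleeping operation such as pcpu_populate_chunk().
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int nr_to_pop = 4;	/* shared state normally read/written under the lock */

/* Slow operation: may block, so it must run with the lock dropped. */
static void populate_one_page(void)
{
	struct timespec ts = { .tv_sec = 0, .tv_nsec = 1000 * 1000 };
	nanosleep(&ts, NULL);
}

static void balance_workfn(void)
{
	pthread_mutex_lock(&lock);	/* taken once for the whole pass */

	while (nr_to_pop > 0) {
		/* fast part: inspect shared state under the lock */
		int batch = 1;

		/* slow part: drop the lock, block, then re-take the lock */
		pthread_mutex_unlock(&lock);
		populate_one_page();
		pthread_mutex_lock(&lock);

		/* shared state may have changed while unlocked */
		nr_to_pop -= batch;
	}

	pthread_mutex_unlock(&lock);
}

int main(void)
{
	balance_workfn();
	printf("balance pass done, nr_to_pop = %d\n", nr_to_pop);
	return 0;
}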