From: Chengming Zhou <chengming.zhou@linux.dev>
To: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Yosry Ahmed <yosryahmed@google.com>,
	Erhard Furtner <erhard_f@mailbox.org>,
	Yu Zhao <yuzhao@google.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	Johannes Weiner <hannes@cmpxchg.org>,
	Nhat Pham <nphamcs@gmail.com>, Minchan Kim <minchan@kernel.org>,
	"Vlastimil Babka (SUSE)" <vbabka@kernel.org>
Subject: Re: kswapd0: page allocation failure: order:0, mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0 (Kernel v6.5.9, 32bit ppc)
Date: Thu, 6 Jun 2024 13:55:50 +0800	[thread overview]
Message-ID: <f92e6d70-32e3-4f45-8fe8-0b7af7a14bc6@linux.dev> (raw)
In-Reply-To: <20240606054334.GD11718@google.com>

On 2024/6/6 13:43, Sergey Senozhatsky wrote:
> On (24/06/06 12:46), Chengming Zhou wrote:
>>>> Agree, I think we should try to improve the locking scalability of zsmalloc.
>>>> I have some thoughts to share, no code or test data yet:
>>>>
>>>> 1. First, we can change the pool-wide lock to a per-class lock, which
>>>>    is more fine-grained.
>>>
>>> Commit c0547d0b6a4b6 "zsmalloc: consolidate zs_pool's migrate_lock
>>> and size_class's locks" [1] claimed no significant difference
>>> between class->lock and pool->lock.
>>
>> Ok, I haven't looked much into the history; that seems to have been preparation
>> for introducing reclaim into zsmalloc? Not sure. But now that the reclaim code
>> in zsmalloc is gone, should we change back to the per-class lock? That is
> 
> Well, the point that commit made was that Nhat (and Johannes?) were
> unable to detect any impact of pool->lock across a variety of cases.  So
> we went ahead with the code simplification.

Right, the code is simpler.

> 
>> obviously more fine-grained than the pool lock. Actually, I have already done it
>> and will test to get some data later.
> 
> Thanks, we'll need data on this.  I'm happy to take the patch, but
> jumping back and forth between class->lock and pool->lock merely
> "for obvious reasons" is not something I'm particularly excited about.

Yeah, agree, we need test data.
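
To make the idea concrete, here is a rough userspace sketch (plain pthread
mutexes, made-up field names, not the actual zsmalloc structures or my patch)
of the contention difference between one pool-wide lock and a lock embedded in
each size class:

	#include <pthread.h>

	#define NR_CLASSES 255	/* zsmalloc has on the order of 255 size classes */

	struct size_class {
		pthread_mutex_t lock;	/* per-class lock: only same-class ops contend */
		/* ... per-class metadata (fullness lists, etc.) ... */
	};

	struct zs_pool {
		pthread_mutex_t lock;	/* pool-wide lock, roughly what upstream has today */
		struct size_class classes[NR_CLASSES];
	};

	/* Pool-wide locking: every allocation serializes on pool->lock. */
	static void zs_alloc_pool_locked(struct zs_pool *pool, int class_idx)
	{
		pthread_mutex_lock(&pool->lock);
		/* ... pick an object from pool->classes[class_idx] ... */
		pthread_mutex_unlock(&pool->lock);
	}

	/* Per-class locking: allocations in different classes can proceed in parallel. */
	static void zs_alloc_class_locked(struct zs_pool *pool, int class_idx)
	{
		struct size_class *sc = &pool->classes[class_idx];

		pthread_mutex_lock(&sc->lock);
		/* ... pick an object from this class only ... */
		pthread_mutex_unlock(&sc->lock);
	}

	int main(void)
	{
		static struct zs_pool pool;	/* zero-init is enough for a demo */
		int i;

		pthread_mutex_init(&pool.lock, NULL);
		for (i = 0; i < NR_CLASSES; i++)
			pthread_mutex_init(&pool.classes[i].lock, NULL);

		zs_alloc_pool_locked(&pool, 42);
		zs_alloc_class_locked(&pool, 42);
		return 0;
	}

The only point of the sketch is that allocations landing in different size
classes stop serializing on a single lock; operations that span classes
(compaction, migration) still need extra coordination, which is where the real
patch gets more involved and why we need the test data first.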


