From: Nhat Pham <nphamcs@gmail.com>
To: Ronald Monthero <debug.penguin32@gmail.com>
Cc: sjenning@redhat.com, akpm@linux-foundation.org,
Dan Streetman <ddstreet@ieee.org>,
Vitaly Wool <vitaly.wool@konsulko.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Chris Li <chrisl@kernel.org>
Subject: Re: [PATCH] mm/zswap: Improve with alloc_workqueue() call
Date: Tue, 19 Dec 2023 16:21:13 -0800 [thread overview]
Message-ID: <CAKEwX=NLe-N6dLvOVErPSL3Vfw6wqHgcUBQoNRLeWkN6chdvLQ@mail.gmail.com> (raw)
In-Reply-To: <CALk6Uxp_vKh2y-Fjh5=0gP_gmBZTLuJ8T0CLAFedRp79zaJikQ@mail.gmail.com>
On Fri, Dec 15, 2023 at 1:04 AM Ronald Monthero
<debug.penguin32@gmail.com> wrote:
>
> On Thu, Dec 14, 2023 at 10:28 AM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> ..
> < snipped >
>
> > I should have been clearer. I'm not against the change per-se - I
> > agree that we should replace create_workqueue() with
> > alloc_workqueue(). What I meant was, IIUC, there are two behavioral
> > changes with this new workqueue creation:
> >
> > a) We're replacing a bounded workqueue (which as you noted, is fixed
> > by create_workqueue()) with an unbounded one (WQ_UNBOUND). This seems
> > fine to me - I doubt locality buys us much here.
>
> Yes, the workqueue attributes change per se, but the existing
> functionality remains unchanged; the change is more forward-looking.
> IMO, under a memory pressure scenario an unbound workqueue might
> handle the situation better, as the number of backing pools is
> dynamic. And with the WQ_UNBOUND attribute the scheduler has more
> freedom to improvise when offloading CPU time slices to workers, e.g.
> if another worker on the same primary CPU has to be served first
> because of WQ_HIGHPRI and WQ_CPU_INTENSIVE workers. Although normal
> and highpri worker-pools don't interact with each other, the target
> CPU at least need not be the same if our zswap worker is WQ_UNBOUND.
I don't object to the change per-se. I just meant that these
changes/discussion should be spelled out in the patch's changelog :)
IMHO, we should document behavior changes if there are any. For
instance, when we switched to kmap_local_page() for zswap, there was a
discussion in the changelog regarding how it differs from the old (and
deprecated) kmap_atomic():
https://lore.kernel.org/linux-mm/20231127160058.586446-1-fabio.maria.de.francesco@linux.intel.com/
and how that difference doesn't matter in the case of zswap.
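For context, here is roughly what the change looks like in mm/zswap.c
(a sketch; the exact flags and max_active value are whatever the patch
settles on):

```c
/* Before: legacy helper, a bound workqueue with max_active fixed at 1 */
shrink_wq = create_workqueue("zswap-shrink");

/* After (as proposed): unbound workqueue, default max_active */
shrink_wq = alloc_workqueue("zswap-shrink",
			    WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
```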
>
> Also noting that the existing wq of the zswap worker has the
> WQ_MEM_RECLAIM attribute, so is there a rescue worker for zswap
> during a memory pressure scenario?
> Quoting: "All work items which might be used on code paths that handle
> memory reclaim are required to be queued on wq's that have a
> rescue-worker reserved for execution under memory pressure. Else it is
> possible that the worker-pool deadlocks waiting for execution contexts
> to free up"
Seems like it, but this behavior is not changed by your patch :) So
I'm not too concerned by it one way or another.
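For reference, create_workqueue() is just a legacy wrapper that already
passes WQ_MEM_RECLAIM, so a rescuer thread is reserved either way; the
replacement only needs to keep that flag. From include/linux/workqueue.h:

```c
#define create_workqueue(name)						\
	alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, 1, (name))
```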
>
> An additional thought: does adding the WQ_FREEZABLE attribute when
> creating the zswap worker make sense for scenarios that handle freeze
> and unfreeze of specific cgroups, or filesystem-wide freeze and
> unfreeze? Does the zswap worker participate in the freeze/unfreeze
> code paths?
I don't think so, no? This zswap worker is scheduled when the zswap
pool limit is hit (which happens on the zswap store/swapping/memory
reclaim path).
>
> > b) create_workqueue() limits the number of concurrent per-cpu
> > execution contexts at 1 (i.e only one single global reclaimer),
> > whereas after this patch this is set to the default value. This seems
> > fine to me too - I don't remember us taking advantage of the previous
> > concurrency limitation. Also, in practice, the task_struct is
> > one-to-one with the zswap_pool's anyway, and most of the time, there
> > is just a single pool being used. (But it begs the question - what's
> > the point of using 0 instead of 1 here?)
>
> Nothing in particular, but I left it at the default 0, which can go
> up to 256 (@max_active per CPU).
> But if the zswap worker is only ever intended to have 1 active worker
> per CPU, then 1 is fine; otherwise the default setting is more
> flexible for scaling.
> Just a thought: does having a higher value help on larger memory systems?
I don't think having a higher value helps here, tbh. We only have one
work_struct per pool, so it shouldn't make a difference either way :)
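To spell out the max_active semantics: passing 0 to alloc_workqueue()
means "use the default", which include/linux/workqueue.h currently
defines as:

```c
enum {
	WQ_MAX_ACTIVE	= 512,		     /* max @max_active per CPU */
	WQ_DFL_ACTIVE	= WQ_MAX_ACTIVE / 2, /* 256; used when 0 is passed */
};
```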
>
> > Both seem fine (to me anyway - other reviewers feel free to take a
> > look and fact-check everything). I just feel like this should be
> > explicitly noted in the changelog, IMHO, in case we are mistaken and
> > need to revisit this :) Either way, not a NACK from me.
>
> Thanks, Nhat, for checking. Above are my thoughts; I could be missing
> some info or be incorrect on certain fronts, so I would welcome
> clarifications.
> Also thanks in advance to other experts/maintainers; please share
> feedback and suggestions.
>
> BR,
> ronald
Also +Chris Li, who is working on improving zswap as well :)
Thread overview: 20+ messages
2023-12-11 5:28 Ronald Monthero
2023-12-11 14:15 ` Nhat Pham
2023-12-13 13:20 ` Ronald Monthero
2023-12-14 0:28 ` Nhat Pham
2023-12-14 1:02 ` Nhat Pham
2023-12-15 9:03 ` Ronald Monthero
2023-12-20 0:21 ` Nhat Pham [this message]
2024-01-16 13:31 ` Ronald Monthero
2024-01-17 19:13 ` Nhat Pham
2024-01-17 19:30 ` Yosry Ahmed
2024-01-18 16:16 ` Johannes Weiner
2024-01-18 16:48 ` Johannes Weiner
2024-01-18 17:03 ` Yosry Ahmed
2024-01-18 18:08 ` Nhat Pham
2024-01-18 17:06 ` Yosry Ahmed
2024-01-18 17:39 ` Johannes Weiner
2024-01-18 18:03 ` Yosry Ahmed
2024-01-18 18:32 ` Nhat Pham
2024-02-21 13:32 ` Ronald Monthero
2024-01-18 18:03 ` Nhat Pham