From: Byungchul Park <byungchul@sk.com>
To: Yeo Reum Yun <YeoReum.Yun@arm.com>
Cc: "kernel_team@skhynix.com" <kernel_team@skhynix.com>,
	"linux-ide@vger.kernel.org" <linux-ide@vger.kernel.org>,
	"kernel-team@lge.com" <kernel-team@lge.com>,
	"open list:MEMORY MANAGEMENT" <linux-mm@kvack.org>,
	"harry.yoo@oracle.com" <harry.yoo@oracle.com>,
	"yskelg@gmail.com" <yskelg@gmail.com>,
	"her0gyugyu@gmail.com" <her0gyugyu@gmail.com>,
	"max.byungchul.park@gmail.com" <max.byungchul.park@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [RFC DEPT v16] Question for dept.
Date: Mon, 2 Jun 2025 11:59:06 +0900
Message-ID: <20250602025906.GA67804@system.software.com>
In-Reply-To: <GV1PR08MB10521BCB90DD275E324622DA0FB61A@GV1PR08MB10521.eurprd08.prod.outlook.com>

On Fri, May 30, 2025 at 11:27:48AM +0000, Yeo Reum Yun wrote:
> Hi Byungchul,
> 
> Thanks for your great work on the latest dept patch.
> 
> But I have some questions about the dept log below, supplied by
> Yunseong Kim <yskelg@gmail.com>.
> 
> ...
> [13304.604203] context A
> [13304.604209]    [S] lock(&uprobe->register_rwsem:0)
> [13304.604217]    [W] __wait_rcu_gp(<sched>:0)
> [13304.604226]    [E] unlock(&uprobe->register_rwsem:0)
> [13304.604234]
> [13304.604239] context B 
> [13304.604244]    [S] lock(event_mutex:0)
> [13304.604252]    [W] lock(&uprobe->register_rwsem:0)
> [13304.604261]    [E] unlock(event_mutex:0)
> [13304.604269]
> [13304.604274] context C
> [13304.604279]    [S] lock(&ctx->mutex:0)
> [13304.604287]    [W] lock(event_mutex:0)
> [13304.604295]    [E] unlock(&ctx->mutex:0)
> [13304.604303]
> [13304.604308] context D
> [13304.604313]    [S] lock(&sig->exec_update_lock:0)
> [13304.604322]    [W] lock(&ctx->mutex:0)
> [13304.604330]    [E] unlock(&sig->exec_update_lock:0)
> [13304.604338]
> [13304.604343] context E
> [13304.604348]    [S] lock(&f->f_pos_lock:0)
> [13304.604356]    [W] lock(&sig->exec_update_lock:0)
> [13304.604365]    [E] unlock(&f->f_pos_lock:0)
> [13304.604373]
> [13304.604378] context F
> [13304.604383]    [S] (unknown)(<sched>:0)
> [13304.604391]    [W] lock(&f->f_pos_lock:0)
> [13304.604399]    [E] try_to_wake_up(<sched>:0)
> [13304.604408]
> [13304.604413] context G
> [13304.604418]    [S] lock(btrfs_trans_num_writers:0)
> [13304.604427]    [W] btrfs_commit_transaction(<sched>:0)
> [13304.604436]    [E] unlock(btrfs_trans_num_writers:0)
> [13304.604445]
> [13304.604449] context H
> [13304.604455]    [S] (unknown)(<sched>:0)
> [13304.604463]    [W] lock(btrfs_trans_num_writers:0)
> [13304.604471]    [E] try_to_wake_up(<sched>:0)
> [13304.604484] context I
> [13304.604490]    [S] (unknown)(<sched>:0)
> [13304.604498]    [W] synchronize_rcu_expedited_wait_once(<sched>:0)
> [13304.604507]    --------------- >8 timeout ---------------
> [13304.604527] context J
> [13304.604533]    [S] (unknown)(<sched>:0)
> [13304.604541]    [W] synchronize_rcu_expedited(<sched>:0)
> [13304.604549]    [E] try_to_wake_up(<sched>:0)

What a long circle!  Dept is working great!

However, this is a false positive caused by RCU waits that haven't been
classified properly yet; Yunseong Kim is working on the fix.  We should
wait for him to complete it :(

> [end of circular]
> ...
> 
> 1. I wonder how context A could be printed with
>     [13304.604217]    [W] __wait_rcu_gp(<sched>:0)
>     since the completion's dept map will be initialized with
>        sdt_might_sleep_start_timeout((x)->dmap, -1L);
>
>     I think the last dept_task's stage_sched_map affects this wrong print.

No.  It's working as it should.  Since (x)->dmap is NULL in this case,
it's supposed to print <sched>.
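In other words, the wait site roughly looks like this (a simplified
sketch; the actual completion wrappers differ):

	sdt_might_sleep_start_timeout((x)->dmap, -1L);	/* dmap == NULL here */
	/* ... the actual wait, e.g. __wait_rcu_gp() ... */
	sdt_might_sleep_end();

A NULL dmap makes the wait get reported against the <sched> map, which
is why the report shows __wait_rcu_gp(<sched>:0) rather than a named
class.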

>     Should this be fixed with:
> 
>  @@ -2713,6 +2713,7 @@ void dept_stage_wait(struct dept_map *m, struct dept_key *k,
>         if (m) {
>                 dt->stage_m = *m;
>                 dt->stage_real_m = m;
> +               dt->stage_sched_map = false;

It should already be false since sdt_might_sleep_end() resets this value
to false.  DEPT_WARN_ON(dt->stage_sched_map) here might make more sense.
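Something along these lines (an untested sketch against the same hunk):

 @@ -2713,6 +2713,7 @@ void dept_stage_wait(struct dept_map *m, struct dept_key *k,
        if (m) {
                dt->stage_m = *m;
                dt->stage_real_m = m;
 +              DEPT_WARN_ON(dt->stage_sched_map);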

>                 /*
>                  * Ensure dt->stage_m.keys != NULL and it works with the
>     
> 2. Whenever dept prints a dependency initialized with sdt_might_sleep_start_timeout(),
>    it currently prints (unknown)(<sched>:0) only.
>    Would it be much better to print task information (pid, comm and others)?

Thanks for such valuable feedback.  I will add it to my to-do list.
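For instance, something like this (just a rough illustration with
placeholder names, not actual dept code):

	/* when the map is the fallback <sched> one, also report the waiter */
	pr_warn("    [W] %s(<sched>:0) by %s/%d\n",
		wait_fn, current->comm, task_pid_nr(current));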

	Byungchul
> 
> Thanks.

