From: sagig <sagig@mellanox.com>
To: aarcange@redhat.com
Cc: gleb@redhat.com, oren@mellanox.com, ogerlitz@mellanox.com,
linux-mm@kvack.org
Subject: Re: [PATCH RFC] mm: convert rcu_read_lock() to srcu_read_lock(), thus allowing to sleep in callbacks
Date: Sun, 5 Feb 2012 12:22:11 +0200
Message-ID: <4F2E5853.2060605@mellanox.com>
In-Reply-To: <4f25649b.8253b40a.3800.319d@mx.google.com>
Hey all,
I posted this patch (an RFC) last week but got no responses.
Since I'm not sure what to do if the init_srcu_struct() call fails (it
might, under memory pressure), I'd appreciate the community's advice on
how to proceed.
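
One direction that would sidestep the failure entirely (a sketch only;
the names below are illustrative and not part of the patch): move the
srcu_struct to file scope and initialize it once at subsystem init, so
an allocation failure becomes a boot-time error rather than something
each notifier path has to handle:

#include <linux/init.h>
#include <linux/srcu.h>

/* Illustrative names -- not part of the patch below. */
static struct srcu_struct mmu_notifier_srcu;

static int __init mmu_notifier_srcu_init(void)
{
	/* Failing here is a boot-time error, not a per-callback fallback. */
	return init_srcu_struct(&mmu_notifier_srcu);
}
subsys_initcall(mmu_notifier_srcu_init);

The callback paths could then call srcu_read_lock(&mmu_notifier_srcu)
unconditionally, with no rcu_read_lock() fallback needed.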
Thanks,
On 1/29/2012 5:23 PM, sagig@mellanox.com wrote:
> Callbacks: invalidate_page, invalidate_range_start/end, change_pte
> Now that the anon_vma lock and i_mmap_mutex are both sleepable mutexes, it is possible to schedule inside the invalidation callbacks.
> This is essential for RDMA drivers implementing on-demand paging, which need a HW sync that can sleep.
>
> Signed-off-by: Sagi Grimberg <sagig@mellanox.co.il>
> ---
> mm/mmu_notifier.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++------
> 1 files changed, 55 insertions(+), 8 deletions(-)
>
> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index 9a611d3..70dadd5 100644
> --- a/mm/mmu_notifier.c
> +++ b/mm/mmu_notifier.c
> @@ -123,10 +123,16 @@ int __mmu_notifier_test_young(struct mm_struct *mm,
> void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address,
> pte_t pte)
> {
> + int idx = -1;
> + struct srcu_struct srcu;
> struct mmu_notifier *mn;
> struct hlist_node *n;
>
> - rcu_read_lock();
> + if (init_srcu_struct(&srcu))
> + rcu_read_lock();
> + else
> + idx = srcu_read_lock(&srcu);
> +
> hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_mm->list, hlist) {
> if (mn->ops->change_pte)
> mn->ops->change_pte(mn, mm, address, pte);
> @@ -137,49 +143,90 @@ void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address,
> else if (mn->ops->invalidate_page)
> mn->ops->invalidate_page(mn, mm, address);
> }
> - rcu_read_unlock();
> +
> + if (idx < 0)
> + rcu_read_unlock();
> + else
> + srcu_read_unlock(&srcu, idx);
> +
> + cleanup_srcu_struct(&srcu);
> }
>
> void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> unsigned long address)
> {
> + int idx = -1;
> + struct srcu_struct srcu;
> struct mmu_notifier *mn;
> struct hlist_node *n;
>
> - rcu_read_lock();
> + if (init_srcu_struct(&srcu))
> + rcu_read_lock();
> + else
> + idx = srcu_read_lock(&srcu);
> +
> hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_mm->list, hlist) {
> if (mn->ops->invalidate_page)
> mn->ops->invalidate_page(mn, mm, address);
> }
> - rcu_read_unlock();
> +
> + if (idx < 0)
> + rcu_read_unlock();
> + else
> + srcu_read_unlock(&srcu, idx);
> +
> + cleanup_srcu_struct(&srcu);
> }
>
> void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> unsigned long start, unsigned long end)
> {
> + int idx = -1;
> + struct srcu_struct srcu;
> struct mmu_notifier *mn;
> struct hlist_node *n;
>
> - rcu_read_lock();
> + if (init_srcu_struct(&srcu))
> + rcu_read_lock();
> + else
> + idx = srcu_read_lock(&srcu);
> +
> hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_mm->list, hlist) {
> if (mn->ops->invalidate_range_start)
> mn->ops->invalidate_range_start(mn, mm, start, end);
> }
> - rcu_read_unlock();
> +
> + if (idx < 0)
> + rcu_read_unlock();
> + else
> + srcu_read_unlock(&srcu, idx);
> +
> + cleanup_srcu_struct(&srcu);
> }
>
> void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> unsigned long start, unsigned long end)
> {
> + int idx = -1;
> + struct srcu_struct srcu;
> struct mmu_notifier *mn;
> struct hlist_node *n;
>
> - rcu_read_lock();
> + if (init_srcu_struct(&srcu))
> + rcu_read_lock();
> + else
> + idx = srcu_read_lock(&srcu);
> hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_mm->list, hlist) {
> if (mn->ops->invalidate_range_end)
> mn->ops->invalidate_range_end(mn, mm, start, end);
> }
> - rcu_read_unlock();
> +
> + if (idx < 0)
> + rcu_read_unlock();
> + else
> + srcu_read_unlock(&srcu, idx);
> +
> + cleanup_srcu_struct(&srcu);
> }
>
> static int do_mmu_notifier_register(struct mmu_notifier *mn,
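
For context, a minimal sketch of the kind of consumer this enables
(hypothetical names throughout -- my_odp_ctx, my_dev_quiesce_range and
my_odp_invalidate_page are illustrative, not from any real driver): an
RDMA driver doing on-demand paging could block in its invalidate
callback until the HW has quiesced DMA on the affected range, which is
only legal once the callback is allowed to sleep:

#include <linux/completion.h>
#include <linux/kernel.h>
#include <linux/mm_types.h>
#include <linux/mmu_notifier.h>

/* Hypothetical per-device ODP context. */
struct my_odp_ctx {
	struct mmu_notifier mn;
	struct completion hw_sync_done;
};

/* Illustrative; issues an invalidation command to the device. */
static void my_dev_quiesce_range(struct my_odp_ctx *ctx,
				 unsigned long start, unsigned long end);

static void my_odp_invalidate_page(struct mmu_notifier *mn,
				   struct mm_struct *mm,
				   unsigned long address)
{
	struct my_odp_ctx *ctx = container_of(mn, struct my_odp_ctx, mn);

	/* Ask the HW to stop using the page... */
	my_dev_quiesce_range(ctx, address, address + PAGE_SIZE);

	/* ...and sleep until it acknowledges -- needs SRCU, not RCU. */
	wait_for_completion(&ctx->hw_sync_done);
}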