From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Miklos Szeredi <miklos@szeredi.hu>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	akpm@linux-foundation.org, neilb@suse.de, dgc@sgi.com,
	tomoki.sekiyama.qu@hitachi.com
Subject: Re: [PATCH 6/6] mm: per device dirty threshold
Date: Wed, 04 Apr 2007 13:01:01 +0200
Message-ID: <1175684461.6483.64.camel@twins>
In-Reply-To: <E1HZ2kU-0005xx-00@dorka.pomaz.szeredi.hu>

On Wed, 2007-04-04 at 12:29 +0200, Miklos Szeredi wrote:
> > > I'm worried about two things:
> > > 
> > > 1) If the per-bdi threshold becomes smaller than the granularity of
> > >    the per-bdi stat (due to the per-CPU counters), then things will
> > >    break.  Shouldn't there be some sanity checking for the calculated
> > >    threshold?
> > 
> > I'm not sure what you're referring to.
> > 
> > void get_writeout_scale(struct backing_dev_info *bdi, int *scale, int *div)
> > {
> >         int bits = vm_cycle_shift - 1;
> >         unsigned long total = __global_bdi_stat(BDI_WRITEOUT_TOTAL);
> >         unsigned long cycle = 1UL << bits;
> >         unsigned long mask = cycle - 1;
> > 
> >         if (bdi_cap_writeback_dirty(bdi)) {
> >                 bdi_writeout_norm(bdi);
> >                 *scale = __bdi_stat(bdi, BDI_WRITEOUT);
> >         } else
> >                 *scale = 0;
> > 
> >         *div = cycle + (total & mask);
> > }
> > 
> > where cycle ~ vm_total_pages
> > scale can be a tad off due to overstep here:
> > 
> > void __inc_bdi_stat(struct backing_dev_info *bdi, enum bdi_stat_item item)
> > {
> >         struct bdi_per_cpu_data *pcd = &bdi->pcd[smp_processor_id()];
> >         s8 *p = pcd->bdi_stat_diff + item;
> > 
> >         (*p)++;
> > 
> >         if (unlikely(*p > pcd->stat_threshold)) {
> >                 int overstep = pcd->stat_threshold / 2;
> > 
> >                 bdi_stat_add(*p + overstep, bdi, item);
> >                 *p = -overstep;
> >         }
> > }
> > 
> > so it could be that: scale / cycle > 1
> > by a very small amount; however:
> 
> No, I'm worried about the case when scale is too small.  If the
> per-bdi threshold becomes smaller than stat_threshold, then things
> won't work, because dirty+writeback will never go below the threshold,
> possibly resulting in the deadlock we are trying to avoid.

/me goes refresh the deadlock details..

A writes to B; A exceeds the dirty limit but writeout is blocked by B
because the dirty limit is exceeded, right?

This cannot happen when we decouple the BDI dirty thresholds, even when
a threshold is 0.

A writes to B; A exceeds A's limit and writes to B, B has a limit of 0, the
one dirty page gets written out (we gain ratio) and life goes on.

Right?
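
To make that concrete, here is a tiny stand-alone simulation of the
decoupled-threshold idea (illustrative only, not the patch code; the
struct, bdi_thresh() and the numbers are made up). Each device gets a
share of the global dirty limit proportional to its recent writeout, so
B starts at 0, but its first completed writeout earns it a share:

/* Toy model of per-BDI dirty thresholds, not the actual kernel code. */
#include <stdio.h>

struct bdi {
	const char *name;
	unsigned long writeout;	/* pages written back so far */
	unsigned long dirty;	/* pages currently dirty */
};

/* threshold = dirty_limit * this device's writeout / total writeout */
static unsigned long bdi_thresh(struct bdi *bdi, unsigned long dirty_limit,
				unsigned long total_writeout)
{
	if (!total_writeout)
		return dirty_limit;	/* nothing measured yet: no split */
	return dirty_limit * bdi->writeout / total_writeout;
}

int main(void)
{
	unsigned long dirty_limit = 1000;	/* global dirty limit, in pages */
	struct bdi a = { "A", 500, 990 };	/* busy device, near the limit */
	struct bdi b = { "B",   0,   1 };	/* quiet device, one dirty page */
	unsigned long total = a.writeout + b.writeout;

	printf("thresh(A)=%lu thresh(B)=%lu\n",
	       bdi_thresh(&a, dirty_limit, total),
	       bdi_thresh(&b, dirty_limit, total));

	/*
	 * B's threshold is 0, so a dirtier on B is throttled immediately,
	 * but writing back B's one dirty page does not depend on B's
	 * threshold; once it completes, B gains "ratio" and its threshold
	 * becomes non-zero, so nothing is stuck.
	 */
	b.dirty--;
	b.writeout++;
	total = a.writeout + b.writeout;
	printf("after B writes out 1 page: thresh(B)=%lu\n",
	       bdi_thresh(&b, dirty_limit, total));

	return 0;
}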

> BTW, the second argument of get_dirty_limits() doesn't seem to get
> used by the caller, or does it?

Correct, there are currently no in-tree users left. However, I do use it
in a debug patch that shows bdi_dirty out of total_dirty. We could remove
it; I have no strong feelings either way, I just thought it might still
be useful for reporting or something.

> > > 2) The loop is sleeping in congestion_wait(WRITE), which seems wrong.
> > >    It may well be possible that none of the queues are congested, so
> > >    it will sleep the full .1 second.  But by that time the queue may
> > >    have become idle and is just sitting there doing nothing.  Maybe
> > >    there should be a per-bdi waitq that is woken up when the per-bdi
> > >    stats are updated.
> > 
> > Good point, .1 seconds is a lot of time.
> > 
> > I'll cook up something like that if nobody beats me to it :-)
> 
> I realized that it may be worth storing the last threshold in the
> bdi as well, so that balance_dirty_pages() doesn't get woken up too
> many times unnecessarily.  But I don't know...

There is already a ratelimit somewhere, but I've heard it suggested to
remove that....
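
For reference, that ratelimit works roughly like this (a simplified,
self-contained sketch, not the kernel code; the names and the value of
`ratelimit' are illustrative): each CPU counts pages dirtied and only
enters the expensive throttling path every so many pages, so
balance_dirty_pages() is not run for every single page that gets
dirtied.

/* Simplified sketch of per-CPU dirty ratelimiting, illustrative only. */
#include <stdio.h>

#define NR_CPUS 4

static unsigned int per_cpu_dirtied[NR_CPUS];
static unsigned int ratelimit = 32;	/* pages between throttle checks */

static void balance_dirty_pages(void)
{
	/* stand-in for the real throttling/writeback work */
	printf("throttle check\n");
}

static void balance_dirty_pages_ratelimited(int cpu, unsigned int nr_pages)
{
	per_cpu_dirtied[cpu] += nr_pages;
	if (per_cpu_dirtied[cpu] >= ratelimit) {
		per_cpu_dirtied[cpu] = 0;
		balance_dirty_pages();	/* expensive path, taken rarely */
	}
}

int main(void)
{
	/* dirty 100 pages one at a time on CPU 0: only 3 throttle checks */
	for (int i = 0; i < 100; i++)
		balance_dirty_pages_ratelimited(0, 1);
	return 0;
}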

