From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Bob Liu <lliubbo@gmail.com>
Cc: Nitin Gupta <ngupta@vflare.org>, Minchan Kim <minchan@kernel.org>,
Luigi Semenzato <semenzato@google.com>,
linux-mm@kvack.org
Subject: RE: zram /proc/swaps accounting weirdness
Date: Tue, 11 Dec 2012 16:22:40 -0800 (PST)
Message-ID: <3a344fcf-c256-42e9-87c3-66cd41d763f9@default>
In-Reply-To: <CAA_GA1eBR6=vasnoSDYZK9qvYQtzVS9q2CHC3M-qeVRRp1dhPg@mail.gmail.com>
> From: Bob Liu [mailto:lliubbo@gmail.com]
> Subject: Re: zram /proc/swaps accounting weirdness
>
> Hi Dan,
>
> On Sat, Dec 8, 2012 at 7:57 AM, Dan Magenheimer
> <dan.magenheimer@oracle.com> wrote:
> > While playing around with zcache+zram (see separate thread),
> > I was watching stats with "watch -d".
> >
> > It appears from the code that /sys/block/zram0/num_writes only
> > increases, never decreases. In my test, num_writes got up
> > to 1863. /sys/block/zram0/disksize is 104857600.
> >
> > I have two swap disks, one zram (pri=60), one real (pri=-1),
> > and as I watched /proc/swaps, the "Used" field grew rapidly
> > and reached the Size (102396k) of the zram swap, and then
> > the second swap disk (a physical disk partition) started being
> > used. Then for a while, the Used field for both swap devices
> > was changing (up and down).
> >
> > Can you explain how this could happen if num_writes never
> > exceeded 1863? This may be harmless in the case where
> > the only swap on the system is zram, or it may indicate a bug
> > somewhere.
> >
>
> Sorry, I didn't quite follow you here.
> In my opinion, num_writes is the count of requests, not their size.
> I think the total size should be the sum of bio->bi_size over all bios,
> so even with num_writes at 1863 the actual size may well exceed 102396k.
Hi Bob --

I added some debug code to record the total of bio->bi_size (and some
other things) and expose it via sysfs. No: bio->bi_size appears to always
(or nearly always) be PAGE_SIZE.
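
So bio counts alone can't explain what /proc/swaps showed; a
back-of-envelope check (assuming 4 KiB pages) gives:

	1863 bios * 4096 bytes/bio = 7,630,848 bytes (~7.3 MiB)

which is nowhere near the 102396k "Used" that /proc/swaps reported
for the zram device.
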
Debug patch attached below in case you are interested.
(Applies to 3.7 final.)
> > It looks like num_writes is counting bios, not pages...
> > which would imply the bios are potentially quite large
> > (and I'll guess they are of size SWAPFILE_CLUSTER, which is
> > defined to be 256). Do large clusters make sense with zram?
> >
> > Late on a Friday so sorry if I am incomprehensible...
> >
> > P.S. The corresponding stat for zcache indicates that
> > it failed 8852 stores, so I would have expected zram
> > to deal with no more than 8852 compressions.
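
(As an aside: as far as I can tell from the 3.7 source -- this is my
reading, not something the patch below depends on -- SWAPFILE_CLUSTER
only governs how swap *slots* are allocated; swap_writepage() in
mm/page_io.c still submits one page per bio, which matches the
bi_size measurements above:

	/* mm/swapfile.c, v3.7 -- slot allocation granularity, not bio size: */
	#define SWAPFILE_CLUSTER	256	/* slots; 256 * 4096 = 1 MiB, assuming 4 KiB pages */

so a large cluster does not imply large bios.)
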
diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
index 6edefde..9679b02 100644
--- a/drivers/staging/zram/zram_drv.c
+++ b/drivers/staging/zram/zram_drv.c
@@ -160,7 +160,7 @@ static void zram_free_page(struct zram *zram, size_t index)
 	zram_stat64_sub(zram, &zram->stats.compr_size,
 			zram->table[index].size);
-	zram_stat_dec(&zram->stats.pages_stored);
+	zram_stat64_sub(zram, &zram->stats.pages_stored, 1);
 
 	zram->table[index].handle = 0;
 	zram->table[index].size = 0;
@@ -371,7 +371,8 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 	/* Update stats */
 	zram_stat64_add(zram, &zram->stats.compr_size, clen);
-	zram_stat_inc(&zram->stats.pages_stored);
+	zram_stat64_inc(zram, &zram->stats.pages_stored);
+	zram_stat64_inc(zram, &zram->stats.cum_pages_stored);
 
 	if (clen <= PAGE_SIZE / 2)
 		zram_stat_inc(&zram->stats.good_compress);
@@ -419,6 +420,8 @@ static void __zram_make_request(struct zram *zram, struct bio *bio, int rw)
 		zram_stat64_inc(zram, &zram->stats.num_reads);
 		break;
 	case WRITE:
+		zram_stat64_add(zram, &zram->stats.tot_bio_bi_size,
+				bio->bi_size);
 		zram_stat64_inc(zram, &zram->stats.num_writes);
 		break;
 	}
@@ -428,6 +431,11 @@ static void __zram_make_request(struct zram *zram, struct bio *bio, int rw)
 	bio_for_each_segment(bvec, bio, i) {
 		int max_transfer_size = PAGE_SIZE - offset;
 
+		switch (rw) {
+		case WRITE:
+			zram_stat64_inc(zram, &zram->stats.num_segments);
+			break;
+		}
+
 		if (bvec->bv_len > max_transfer_size) {
 			/*
diff --git a/drivers/staging/zram/zram_drv.h b/drivers/staging/zram/zram_drv.h
index 572c0b1..c40fe50 100644
--- a/drivers/staging/zram/zram_drv.h
+++ b/drivers/staging/zram/zram_drv.h
@@ -76,12 +76,15 @@ struct zram_stats {
 	u64 compr_size;	/* compressed size of pages stored */
 	u64 num_reads;	/* failed + successful */
 	u64 num_writes;	/* --do-- */
+	u64 tot_bio_bi_size;	/* total bio->bi_size seen on writes */
+	u64 num_segments;	/* write segments processed */
 	u64 failed_reads;	/* should NEVER! happen */
 	u64 failed_writes;	/* can happen when memory is too low */
 	u64 invalid_io;	/* non-page-aligned I/O requests */
 	u64 notify_free;	/* no. of swap slot free notifications */
 	u32 pages_zero;	/* no. of zero filled pages */
-	u32 pages_stored;	/* no. of pages currently stored */
+	u64 pages_stored;	/* no. of pages currently stored */
+	u64 cum_pages_stored;	/* pages cumulatively stored */
 	u32 good_compress;	/* % of pages with compression ratio<=50% */
 	u32 bad_compress;	/* % of pages with compression ratio>=75% */
 };
diff --git a/drivers/staging/zram/zram_sysfs.c b/drivers/staging/zram/zram_sysfs.c
index edb0ed4..2df62d4 100644
--- a/drivers/staging/zram/zram_sysfs.c
+++ b/drivers/staging/zram/zram_sysfs.c
@@ -136,6 +136,42 @@ static ssize_t num_writes_show(struct device *dev,
 		zram_stat64_read(zram, &zram->stats.num_writes));
 }
 
+static ssize_t tot_bio_bi_size_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct zram *zram = dev_to_zram(dev);
+
+	return sprintf(buf, "%llu\n",
+		zram_stat64_read(zram, &zram->stats.tot_bio_bi_size));
+}
+
+static ssize_t num_segments_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct zram *zram = dev_to_zram(dev);
+
+	return sprintf(buf, "%llu\n",
+		zram_stat64_read(zram, &zram->stats.num_segments));
+}
+
+static ssize_t pages_stored_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct zram *zram = dev_to_zram(dev);
+
+	return sprintf(buf, "%llu\n",
+		zram_stat64_read(zram, &zram->stats.pages_stored));
+}
+
+static ssize_t cum_pages_stored_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct zram *zram = dev_to_zram(dev);
+
+	return sprintf(buf, "%llu\n",
+		zram_stat64_read(zram, &zram->stats.cum_pages_stored));
+}
+
 static ssize_t invalid_io_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
 {
@@ -198,6 +234,10 @@ static DEVICE_ATTR(initstate, S_IRUGO, initstate_show, NULL);
 static DEVICE_ATTR(reset, S_IWUSR, NULL, reset_store);
 static DEVICE_ATTR(num_reads, S_IRUGO, num_reads_show, NULL);
 static DEVICE_ATTR(num_writes, S_IRUGO, num_writes_show, NULL);
+static DEVICE_ATTR(tot_bio_bi_size, S_IRUGO, tot_bio_bi_size_show, NULL);
+static DEVICE_ATTR(num_segments, S_IRUGO, num_segments_show, NULL);
+static DEVICE_ATTR(pages_stored, S_IRUGO, pages_stored_show, NULL);
+static DEVICE_ATTR(cum_pages_stored, S_IRUGO, cum_pages_stored_show, NULL);
 static DEVICE_ATTR(invalid_io, S_IRUGO, invalid_io_show, NULL);
 static DEVICE_ATTR(notify_free, S_IRUGO, notify_free_show, NULL);
 static DEVICE_ATTR(zero_pages, S_IRUGO, zero_pages_show, NULL);
@@ -211,6 +251,10 @@ static struct attribute *zram_disk_attrs[] = {
 	&dev_attr_reset.attr,
 	&dev_attr_num_reads.attr,
 	&dev_attr_num_writes.attr,
+	&dev_attr_tot_bio_bi_size.attr,
+	&dev_attr_num_segments.attr,
+	&dev_attr_pages_stored.attr,
+	&dev_attr_cum_pages_stored.attr,
 	&dev_attr_invalid_io.attr,
 	&dev_attr_notify_free.attr,
 	&dev_attr_zero_pages.attr,
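
For anyone who wants to eyeball the new counters, a rough, untested
userspace sketch follows. It assumes a single device named zram0 and
the sysfs names added by the patch above; nothing else is implied.

	#include <stdio.h>

	/* Read one decimal counter from /sys/block/zram0/<name>. */
	static unsigned long long read_stat(const char *name)
	{
		char path[256];
		unsigned long long v = 0;
		FILE *f;

		snprintf(path, sizeof(path), "/sys/block/zram0/%s", name);
		f = fopen(path, "r");
		if (!f)
			return 0;
		if (fscanf(f, "%llu", &v) != 1)
			v = 0;
		fclose(f);
		return v;
	}

	int main(void)
	{
		unsigned long long writes = read_stat("num_writes");
		unsigned long long bytes = read_stat("tot_bio_bi_size");

		/* If bios really are one page each, this should print
		 * something very close to PAGE_SIZE. */
		if (writes)
			printf("avg bytes per write bio: %llu\n", bytes / writes);
		return 0;
	}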