linux-mm.kvack.org archive mirror
From: Andrew Morton <akpm@linux-foundation.org>
To: Aaron Lu <aaron.lu@linux.alibaba.com>
Cc: linux-mm@kvack.org, Huang Ying <ying.huang@intel.com>,
	Hugh Dickins <hughd@google.com>
Subject: Re: [PATCH] mm, swap: use rbtree for swap_extent
Date: Thu, 23 May 2019 12:00:35 -0700	[thread overview]
Message-ID: <20190523120035.efb7c3bf4c91e3aef255621c@linux-foundation.org> (raw)
In-Reply-To: <20190523142404.GA181@aaronlu>

On Thu, 23 May 2019 22:24:15 +0800 Aaron Lu <aaron.lu@linux.alibaba.com> wrote:

> From: Aaron Lu <ziqian.lzq@antfin.com>
> 
> swap_extent is used to map a swap page offset to the backing device's
> block offset. One swap_extent covers one contiguous block range, and
> all of a device's swap_extents are managed in a linked list.
> 
> These swap_extents are used by map_swap_entry() in swap's read and
> write paths. To find the backing device's block offset for a given page
> offset, the swap_extent list is traversed linearly, with
> curr_swap_extent used as a cache to speed up the search.
> 
> This works well as long as the swap device has few extents or few
> processes access it, but when the device has many extents and a number
> of processes access it concurrently, it becomes a problem. On one of
> our servers, the disk's remaining space is tight:
> $df -h
> Filesystem      Size  Used Avail Use% Mounted on
> ... ...
> /dev/nvme0n1p1  1.8T  1.3T  504G  72% /home/t4
> 
> When creating an 80G swapfile there, as many as 84656 swap extents are
> created. The end result is, the kernel spends about 30% of its time in
> map_swap_entry() and swap throughput is only 70MB/s. As a comparison,
> with a smaller 4G swapfile whose extent count dropped to about 2000,
> swap throughput is back to 400-500MB/s and map_swap_entry() takes only
> about 3%.
> 
> One downside of using an rbtree for swap_extent is that 'struct
> rb_node' takes 24 bytes while 'struct list_head' takes 16 bytes,
> i.e. 8 bytes more per swap_extent. For a swapfile that has 80k
> swap_extents, that means 625KiB more memory consumed.
> 
> Test:
> 
> Since it's not possible to reboot that server, I could not test this
> patch directly there. Instead, I tested it on another server with an
> NVMe disk.
> 
> I created a 20G swapfile on an NVMe backed XFS fs. By default, the
> filesystem is quite clean and the created swapfile has only 2 extents.
> Testing vanilla and this patch shows no obvious performance difference
> when swapfile is not fragmented.
> 
> To see the patch's effects, I used some tweaks to manually fragment the
> swapfile by breaking the extents at 1M boundaries. This left the
> swapfile with 20K extents.
> 
> nr_task=4
> kernel   swapout(KB/s) map_swap_entry(perf)  swapin(KB/s) map_swap_entry(perf)
> vanilla  165191           90.77%             171798          90.21%
> patched  858993 +420%      2.16%             715827 +317%     0.77%
> 
> nr_task=8
> kernel   swapout(KB/s) map_swap_entry(perf)  swapin(KB/s) map_swap_entry(perf)
> vanilla  306783           92.19%             318145          87.76%
> patched  954437 +211%      2.35%            1073741 +237%     1.57%
> 
> swapout: swap-out throughput in KB/s, higher is better.
> 1st map_swap_entry: percentage of cpu cycles spent in map_swap_entry()
>   during swap-out, sampled by perf.
> swapin: swap-in throughput in KB/s, higher is better.
> 2nd map_swap_entry: percentage of cpu cycles spent in map_swap_entry()
>   during swap-in, sampled by perf.
> 
> nr_task=1 doesn't show any difference; this is because
> curr_swap_extent effectively caches the correct swap extent for a
> single-task workload.

Seems sensible and the code looks straightforward.  Hopefully Hugh will
be able to cast a gimlet eye over it.

>  
> ...
>
> +static struct swap_extent *
> +offset_to_swap_extent(struct swap_info_struct *sis, unsigned long offset)
> +{
> +	struct swap_extent *se;
> +	struct rb_node *rb;
> +
> +	rb = sis->swap_extent_root.rb_node;
> +	while (rb) {
> +		se = rb_entry(rb, struct swap_extent, rb_node);
> +		if (offset < se->start_page)
> +			rb = rb->rb_left;
> +		else if (offset >= se->start_page + se->nr_pages)
> +			rb = rb->rb_right;
> +		else
> +			return se;
> +	}
> +	/* It *must* be present */
> +	BUG_ON(1);

I'm surprised this doesn't generate a warning about the function
failing to return a value.  I guess the compiler figured out that
BUG_ON(non-zero-constant) is equivalent to BUG(), which is noreturn.

Let's do this?

--- a/mm/swapfile.c~mm-swap-use-rbtree-for-swap_extent-fix
+++ a/mm/swapfile.c
@@ -218,7 +218,7 @@ offset_to_swap_extent(struct swap_info_s
 			return se;
 	}
 	/* It *must* be present */
-	BUG_ON(1);
+	BUG();
 }
 


Thread overview: 3+ messages
2019-05-23 14:24 Aaron Lu
2019-05-23 19:00 ` Andrew Morton [this message]
2019-05-24  4:06   ` Aaron Lu
