linux-mm.kvack.org archive mirror
* [PATCH] zsmalloc: consider ZS_ALMOST_FULL as migrate source
@ 2015-07-10  1:32 Minchan Kim
  2015-07-10  1:58 ` Sergey Senozhatsky
  2015-07-10  2:06 ` Sergey Senozhatsky
  0 siblings, 2 replies; 8+ messages in thread
From: Minchan Kim @ 2015-07-10  1:32 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Sergey Senozhatsky, Nitin Gupta, linux-kernel, linux-mm, Minchan Kim

There is no reason to prevent selecting ZS_ALMOST_FULL as a migration
source if we cannot find a source in ZS_ALMOST_EMPTY.

With this patch, zs_can_compact will return a more exact result.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c |   19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 8c78bcb..7bd7dde 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1687,12 +1687,20 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
 static struct page *isolate_source_page(struct size_class *class)
 {
 	struct page *page;
+	int i;
+	bool found = false;
 
-	page = class->fullness_list[ZS_ALMOST_EMPTY];
-	if (page)
-		remove_zspage(page, class, ZS_ALMOST_EMPTY);
+	for (i = ZS_ALMOST_EMPTY; i >= ZS_ALMOST_FULL; i--) {
+		page = class->fullness_list[i];
+		if (!page)
+			continue;
 
-	return page;
+		remove_zspage(page, class, i);
+		found = true;
+		break;
+	}
+
+	return found ? page : NULL;
 }
 
 /*
@@ -1706,9 +1714,6 @@ static unsigned long zs_can_compact(struct size_class *class)
 {
 	unsigned long obj_wasted;
 
-	if (!zs_stat_get(class, CLASS_ALMOST_EMPTY))
-		return 0;
-
 	obj_wasted = zs_stat_get(class, OBJ_ALLOCATED) -
 		zs_stat_get(class, OBJ_USED);
 
-- 
1.7.9.5


* Re: [PATCH] zsmalloc: consider ZS_ALMOST_FULL as migrate source
  2015-07-10  1:32 [PATCH] zsmalloc: consider ZS_ALMOST_FULL as migrate source Minchan Kim
@ 2015-07-10  1:58 ` Sergey Senozhatsky
  2015-07-10  2:29   ` Minchan Kim
  2015-07-10  2:06 ` Sergey Senozhatsky
  1 sibling, 1 reply; 8+ messages in thread
From: Sergey Senozhatsky @ 2015-07-10  1:58 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Sergey Senozhatsky, Nitin Gupta, linux-kernel, linux-mm

On (07/10/15 10:32), Minchan Kim wrote:
> There is no reason to prevent selecting ZS_ALMOST_FULL as a migration
> source if we cannot find a source in ZS_ALMOST_EMPTY.
> 
> With this patch, zs_can_compact will return a more exact result.
> 

wouldn't that be too aggressive?

draining 'only ZS_ALMOST_EMPTY classes' sounds safer than draining
'ZS_ALMOST_EMPTY and ZS_ALMOST_FULL classes'. you seemed to be worried
that compaction can leave no unused objects in classes, which would
result in zspage allocation happening right after compaction. it looks
like the chances of causing zspage allocation are even higher here.
don't you think so?

> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  mm/zsmalloc.c |   19 ++++++++++++-------
>  1 file changed, 12 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 8c78bcb..7bd7dde 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1687,12 +1687,20 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
>  static struct page *isolate_source_page(struct size_class *class)
>  {
>  	struct page *page;
> +	int i;
> +	bool found = false;
>  
> -	page = class->fullness_list[ZS_ALMOST_EMPTY];
> -	if (page)
> -		remove_zspage(page, class, ZS_ALMOST_EMPTY);
> +	for (i = ZS_ALMOST_EMPTY; i >= ZS_ALMOST_FULL; i--) {
> +		page = class->fullness_list[i];
> +		if (!page)
> +			continue;
>  
> -	return page;
> +		remove_zspage(page, class, i);
> +		found = true;
> +		break;
> +	}
> +
> +	return found ? page : NULL;
>  }
>  
>  /*
> @@ -1706,9 +1714,6 @@ static unsigned long zs_can_compact(struct size_class *class)
>  {
>  	unsigned long obj_wasted;
>  
> -	if (!zs_stat_get(class, CLASS_ALMOST_EMPTY))
> -		return 0;
> -

well, you asked to add this check just a week or two ago (it's not even
in -next yet) and now you're removing it.

>  	obj_wasted = zs_stat_get(class, OBJ_ALLOCATED) -
>  		zs_stat_get(class, OBJ_USED);
>  

	-ss


* Re: [PATCH] zsmalloc: consider ZS_ALMOST_FULL as migrate source
  2015-07-10  1:32 [PATCH] zsmalloc: consider ZS_ALMOST_FULL as migrate source Minchan Kim
  2015-07-10  1:58 ` Sergey Senozhatsky
@ 2015-07-10  2:06 ` Sergey Senozhatsky
  2015-07-10  2:34   ` Minchan Kim
  1 sibling, 1 reply; 8+ messages in thread
From: Sergey Senozhatsky @ 2015-07-10  2:06 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Sergey Senozhatsky, Nitin Gupta, linux-kernel, linux-mm

On (07/10/15 10:32), Minchan Kim wrote:
>  static struct page *isolate_source_page(struct size_class *class)
>  {
>  	struct page *page;
> +	int i;
> +	bool found = false;
>  

why use 'bool found'? just return `page', which will be either NULL
or !NULL?
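
something like the below, perhaps? (an untested sketch just to
illustrate the idea, not the actual v2 patch)

static struct page *isolate_source_page(struct size_class *class)
{
	int i;
	struct page *page = NULL;

	for (i = ZS_ALMOST_EMPTY; i >= ZS_ALMOST_FULL; i--) {
		page = class->fullness_list[i];
		if (!page)
			continue;

		/* no 'found' flag: 'page' is already NULL or a valid zspage */
		remove_zspage(page, class, i);
		break;
	}

	return page;
}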

	-ss

> -	page = class->fullness_list[ZS_ALMOST_EMPTY];
> -	if (page)
> -		remove_zspage(page, class, ZS_ALMOST_EMPTY);
> +	for (i = ZS_ALMOST_EMPTY; i >= ZS_ALMOST_FULL; i--) {
> +		page = class->fullness_list[i];
> +		if (!page)
> +			continue;
>  
> -	return page;
> +		remove_zspage(page, class, i);
> +		found = true;
> +		break;
> +	}
> +
> +	return found ? page : NULL;
>  }

	-ss


* Re: [PATCH] zsmalloc: consider ZS_ALMOST_FULL as migrate source
  2015-07-10  1:58 ` Sergey Senozhatsky
@ 2015-07-10  2:29   ` Minchan Kim
  2015-07-10  4:19     ` Sergey Senozhatsky
  0 siblings, 1 reply; 8+ messages in thread
From: Minchan Kim @ 2015-07-10  2:29 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, Sergey Senozhatsky, Nitin Gupta, linux-kernel, linux-mm

Hi Sergey,

On Fri, Jul 10, 2015 at 10:58:28AM +0900, Sergey Senozhatsky wrote:
> On (07/10/15 10:32), Minchan Kim wrote:
> > There is no reason to prevent selecting ZS_ALMOST_FULL as a migration
> > source if we cannot find a source in ZS_ALMOST_EMPTY.
> > 
> > With this patch, zs_can_compact will return a more exact result.
> > 
> 
> wouldn't that be too aggressive?
> 
> draining 'only ZS_ALMOST_EMPTY classes' sounds safer than draining
> 'ZS_ALMOST_EMPTY and ZS_ALMOST_FULL classes'. you seemed to be worried
> that compaction can leave no unused objects in classes, which would
> result in zspage allocation happening right after compaction. it looks
> like the chances of causing zspage allocation are even higher here.
> don't you think so?

Good question.

My worry was failure of order-0 page allocation in the zram-swap path
when memory pressure is really heavy, but I stopped insisting on that
some time ago. The reasons I changed my mind are:

1. The system is almost dead if there are no order-0 pages.
2. Even if the old behavior happened to work well, that was not by design, just luck.


> 
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> >  mm/zsmalloc.c |   19 ++++++++++++-------
> >  1 file changed, 12 insertions(+), 7 deletions(-)
> > 
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 8c78bcb..7bd7dde 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1687,12 +1687,20 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
> >  static struct page *isolate_source_page(struct size_class *class)
> >  {
> >  	struct page *page;
> > +	int i;
> > +	bool found = false;
> >  
> > -	page = class->fullness_list[ZS_ALMOST_EMPTY];
> > -	if (page)
> > -		remove_zspage(page, class, ZS_ALMOST_EMPTY);
> > +	for (i = ZS_ALMOST_EMPTY; i >= ZS_ALMOST_FULL; i--) {
> > +		page = class->fullness_list[i];
> > +		if (!page)
> > +			continue;
> >  
> > -	return page;
> > +		remove_zspage(page, class, i);
> > +		found = true;
> > +		break;
> > +	}
> > +
> > +	return found ? page : NULL;
> >  }
> >  
> >  /*
> > @@ -1706,9 +1714,6 @@ static unsigned long zs_can_compact(struct size_class *class)
> >  {
> >  	unsigned long obj_wasted;
> >  
> > -	if (!zs_stat_get(class, CLASS_ALMOST_EMPTY))
> > -		return 0;
> > -
> 
> well, you asked to add this check like a week or two ago (it's not even
> in -next yet) and now you remove it.

The reason I wanted to check CLASS_ALMOST_EMPTY was to make zs_can_compact exact.
But with this patch, we can achieve that without the check above.
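
IOW, after this patch zs_can_compact boils down to something like the
below (a rough sketch from memory, not a verbatim copy of the function;
only the early-return removal comes from this patch):

static unsigned long zs_can_compact(struct size_class *class)
{
	unsigned long obj_wasted;

	/* objects that could be freed by packing used objects together */
	obj_wasted = zs_stat_get(class, OBJ_ALLOCATED) -
		zs_stat_get(class, OBJ_USED);

	/* convert wasted objects into the number of reclaimable pages */
	obj_wasted /= get_maxobj_per_zspage(class->size,
			class->pages_per_zspage);

	return obj_wasted * class->pages_per_zspage;
}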

> 
> >  	obj_wasted = zs_stat_get(class, OBJ_ALLOCATED) -
> >  		zs_stat_get(class, OBJ_USED);
> >  
> 
> 	-ss

-- 
Kind regards,
Minchan Kim


* Re: [PATCH] zsmalloc: consider ZS_ALMOST_FULL as migrate source
  2015-07-10  2:06 ` Sergey Senozhatsky
@ 2015-07-10  2:34   ` Minchan Kim
  0 siblings, 0 replies; 8+ messages in thread
From: Minchan Kim @ 2015-07-10  2:34 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, Sergey Senozhatsky, Nitin Gupta, linux-kernel, linux-mm

On Fri, Jul 10, 2015 at 11:06:24AM +0900, Sergey Senozhatsky wrote:
> On (07/10/15 10:32), Minchan Kim wrote:
> >  static struct page *isolate_source_page(struct size_class *class)
> >  {
> >  	struct page *page;
> > +	int i;
> > +	bool found = false;
> >  
> 
> why use 'bool found'? just return `page', which will be either NULL
> or !NULL?

It seems I sent my old version, which had a bug during testing. :(
I will resend with the fix.

Thanks, Sergey!

> 
> 	-ss
> 
> > -	page = class->fullness_list[ZS_ALMOST_EMPTY];
> > -	if (page)
> > -		remove_zspage(page, class, ZS_ALMOST_EMPTY);
> > +	for (i = ZS_ALMOST_EMPTY; i >= ZS_ALMOST_FULL; i--) {
> > +		page = class->fullness_list[i];
> > +		if (!page)
> > +			continue;
> >  
> > -	return page;
> > +		remove_zspage(page, class, i);
> > +		found = true;
> > +		break;
> > +	}
> > +
> > +	return found ? page : NULL;
> >  }
> 
> 	-ss

-- 
Kind regards,
Minchan Kim


* Re: [PATCH] zsmalloc: consider ZS_ALMOST_FULL as migrate source
  2015-07-10  2:29   ` Minchan Kim
@ 2015-07-10  4:19     ` Sergey Senozhatsky
  2015-07-10  5:21       ` Minchan Kim
  0 siblings, 1 reply; 8+ messages in thread
From: Sergey Senozhatsky @ 2015-07-10  4:19 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Sergey Senozhatsky, Andrew Morton, Sergey Senozhatsky,
	Nitin Gupta, linux-kernel, linux-mm

On (07/10/15 11:29), Minchan Kim wrote:
> Good question.
> 
> My worry was failure of order-0 page allocation in zram-swap path
> when memory presssure is really heavy but I didn't insist to you
> from sometime. The reason I changed my mind was
> 
> 1. It's almost dead system if there is no order-0 page
> 2. If old might be working well, it's not our design, just luck.

I mean, I find your argument that some level of fragmentation
can be useful to be valid, to some degree.


hm... by the way,

unsigned long zs_malloc(struct zs_pool *pool, size_t size)
{
...
   size += ZS_HANDLE_SIZE;
   class = pool->size_class[get_size_class_index(size)];
...
   if (!first_page) {
	   spin_unlock(&class->lock);
	   first_page = alloc_zspage(class, pool->flags);
	   if (unlikely(!first_page)) {
		   free_handle(pool, handle);
		   return 0;
	   }
   ...

I'm thinking now, does it make sense to try harder here? if
alloc_zspage() fails, then maybe we can try one of the unused
objects from an 'upper' (larger/next) class? there might be
plenty of them.
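
e.g. something along these lines (a completely untested sketch just to
show the idea; find_fallback_class() is a made-up name, and I'm assuming
the zs_size_classes counter for the upper bound):

static struct size_class *find_fallback_class(struct zs_pool *pool, int index)
{
	int i;

	/* probe larger size classes for zspages that still have unused objects */
	for (i = index + 1; i < zs_size_classes; i++) {
		struct size_class *class = pool->size_class[i];

		if (zs_stat_get(class, OBJ_ALLOCATED) >
		    zs_stat_get(class, OBJ_USED))
			return class;
	}

	return NULL;
}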

	-ss


* Re: [PATCH] zsmalloc: consider ZS_ALMOST_FULL as migrate source
  2015-07-10  4:19     ` Sergey Senozhatsky
@ 2015-07-10  5:21       ` Minchan Kim
  2015-07-10  5:34         ` Sergey Senozhatsky
  0 siblings, 1 reply; 8+ messages in thread
From: Minchan Kim @ 2015-07-10  5:21 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, Sergey Senozhatsky, Nitin Gupta, linux-kernel, linux-mm

On Fri, Jul 10, 2015 at 01:19:29PM +0900, Sergey Senozhatsky wrote:
> On (07/10/15 11:29), Minchan Kim wrote:
> > Good question.
> > 
> > My worry was failure of order-0 page allocation in the zram-swap path
> > when memory pressure is really heavy, but I stopped insisting on that
> > some time ago. The reasons I changed my mind are:
> > 
> > 1. The system is almost dead if there are no order-0 pages.
> > 2. Even if the old behavior happened to work well, that was not by design, just luck.
> 
> I mean, I find your argument that some level of fragmentation
> can be useful to be valid, to some degree.

The benefit I had in mind was to prevent failure of allocation.

> 
> 
> hm... by the way,
> 
> unsigned long zs_malloc(struct zs_pool *pool, size_t size)
> {
> ...
>    size += ZS_HANDLE_SIZE;
>    class = pool->size_class[get_size_class_index(size)];
> ...
>    if (!first_page) {
> 	   spin_unlock(&class->lock);
> 	   first_page = alloc_zspage(class, pool->flags);
> 	   if (unlikely(!first_page)) {
> 		   free_handle(pool, handle);
> 		   return 0;
> 	   }
>    ...
> 
> I'm thinking now, does it make sense to try harder here? if
> alloc_zspage() fails, then maybe we can try one of the unused
> objects from an 'upper' (larger/next) class? there might be
> plenty of them.

I actually thought about that, but I haven't had any report from the
community or the product division of my company until now.
But with auto-compaction, the chance would be higher than before,
so let's keep an eye on it (I think users can find it easily because
the swap layer reports a write failure).

If it happens (i.e., a report from someone), we could try to compact
and then, if that fails, fall back to an upper class as a last
resort.

Thanks.
> 
> 	-ss


* Re: [PATCH] zsmalloc: consider ZS_ALMOST_FULL as migrate source
  2015-07-10  5:21       ` Minchan Kim
@ 2015-07-10  5:34         ` Sergey Senozhatsky
  0 siblings, 0 replies; 8+ messages in thread
From: Sergey Senozhatsky @ 2015-07-10  5:34 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Sergey Senozhatsky, Andrew Morton, Sergey Senozhatsky,
	Nitin Gupta, linux-kernel, linux-mm

On (07/10/15 14:21), Minchan Kim wrote:
> > I mean, I find your argument that some level of fragmentation
> > can be useful to be valid, to some degree.
> 
> The benefit I had in mind was to prevent failure of allocation.
> 

Sure. I tested the patch.

cat /sys/block/zram0/mm_stat
3122102272 2882639758 2890366976        0 2969432064       55    79294

cat /sys/block/zram0/stat
    7212        0    57696       73  7513254        0 60106032    52096     0    52106    52113

Compaction stats:

[14637.002961] compaction nr:89 (full:528 part:3027)  ~= 0.148

Nothing `alarming'.


> > I'm thinking now, does it make sense to try harder here? if
> > alloc_zspage() fails, then maybe we can try one of the unused
> > objects from an 'upper' (larger/next) class? there might be
> > plenty of them.
> 
> I actually thought about that, but I haven't had any report from the
> community or the product division of my company until now.
> But with auto-compaction, the chance would be higher than before,
> so let's keep an eye on it (I think users can find it easily because
> the swap layer reports a write failure).
> 
> If it happens (i.e., a report from someone), we could try to compact
> and then, if that fails, fall back to an upper class as a last
> resort.
> 

OK.

	-ss

