From: Johannes Weiner <hannes@cmpxchg.org>
To: Roman Gushchin <guro@fb.com>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	Michal Hocko <mhocko@kernel.org>,
	Shakeel Butt <shakeelb@google.com>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	linux-kernel@vger.kernel.org, kernel-team@fb.com,
	Bharata B Rao <bharata@linux.ibm.com>,
	Yafang Shao <laoar.shao@gmail.com>
Subject: Re: [PATCH v2 15/28] mm: memcg/slab: obj_cgroup API
Date: Mon, 3 Feb 2020 14:31:58 -0500	[thread overview]
Message-ID: <20200203193158.GH1697@cmpxchg.org> (raw)
In-Reply-To: <20200127173453.2089565-16-guro@fb.com>

On Mon, Jan 27, 2020 at 09:34:40AM -0800, Roman Gushchin wrote:
> The obj_cgroup API provides the ability to account sub-page sized kernel
> objects, which potentially outlive the original memory cgroup.
> 
> The top-level API consists of the following functions:
>   bool obj_cgroup_tryget(struct obj_cgroup *objcg);
>   void obj_cgroup_get(struct obj_cgroup *objcg);
>   void obj_cgroup_put(struct obj_cgroup *objcg);
> 
>   int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size);
>   void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size);
> 
>   struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg);
> 
> An object cgroup is basically a pointer to a memory cgroup with a per-cpu
> reference counter. It substitutes for a memory cgroup in places where
> it's necessary to charge a custom number of bytes instead of pages.
> 
> All charged memory rounded down to pages is charged to the
> corresponding memory cgroup using __memcg_kmem_charge().
> 
> It implements reparenting: on memcg offlining, object cgroups get
> reattached to the parent memory cgroup. Each online memory cgroup has an
> associated active object cgroup to handle new allocations, and a list
> of all attached object cgroups. On offlining of a cgroup this list is
> reparented, and for each object cgroup in the list the memcg pointer is
> swapped to the parent memory cgroup. This prevents long-lived objects
> from pinning the original memory cgroup in memory.
> 
> The implementation is based on byte-sized per-cpu stocks. A sub-page
> sized leftover is stored in an atomic field that is part of the
> obj_cgroup object, so on cgroup offlining the leftover is automatically
> reparented.
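
For illustration, the page/byte split the drain path performs can be modeled in plain userspace C. This is a sketch only; split_stock() is a hypothetical stand-in for the arithmetic in drain_obj_stock(), with a fixed 4K page size assumed:

```c
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Hypothetical userspace model of the arithmetic in drain_obj_stock():
 * whole pages are uncharged from the memcg page counter, and the
 * sub-page remainder is parked in objcg->nr_charged_bytes, where it is
 * reparented automatically together with the objcg. */
struct stock_split {
	unsigned long nr_pages;	/* uncharged via __memcg_kmem_uncharge() */
	unsigned long nr_bytes;	/* parked in nr_charged_bytes */
};

static struct stock_split split_stock(unsigned long stock_bytes)
{
	struct stock_split s;

	s.nr_pages = stock_bytes >> PAGE_SHIFT;
	s.nr_bytes = stock_bytes & (PAGE_SIZE - 1);
	return s;
}
```
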
> 
> memcg->objcg is rcu protected.
> objcg->memcg is a raw pointer, which always points at a memory
> cgroup, but can be atomically swapped to the parent memory cgroup. So
> the caller must ensure the lifetime of the cgroup, e.g. by grabbing
> rcu_read_lock or css_set_lock.
> 
> Signed-off-by: Roman Gushchin <guro@fb.com>

Suggested-by: Johannes Weiner <hannes@cmpxchg.org>

> @@ -194,6 +195,22 @@ struct memcg_cgwb_frn {
>  	struct wb_completion done;	/* tracks in-flight foreign writebacks */
>  };
>  
> +/*
> + * Bucket for arbitrarily byte-sized objects charged to a memory
> + * cgroup. The bucket can be reparented in one piece when the cgroup
> + * is destroyed, without having to round up the individual references
> + * of all live memory objects in the wild.
> + */
> +struct obj_cgroup {
> +	struct percpu_ref refcnt;
> +	struct mem_cgroup *memcg;
> +	atomic_t nr_charged_bytes;
> +	union {
> +		struct list_head list;
> +		struct rcu_head rcu;
> +	};
> +};
> +
>  /*
>   * The memory controller data structure. The memory controller controls both
>   * page cache and RSS per cgroup. We would eventually like to provide
> @@ -306,6 +323,8 @@ struct mem_cgroup {
>  	int kmemcg_id;
>  	enum memcg_kmem_state kmem_state;
>  	struct list_head kmem_caches;
> +	struct obj_cgroup __rcu *objcg;
> +	struct list_head objcg_list;

These could use a comment, IMO.

	/*
	 * Active object accounting bucket, as well as
	 * reparented buckets from dead children with
	 * outstanding objects.
	 */

or something like that.

> @@ -257,6 +257,73 @@ struct cgroup_subsys_state *vmpressure_to_css(struct vmpressure *vmpr)
>  }
>  
>  #ifdef CONFIG_MEMCG_KMEM
> +extern spinlock_t css_set_lock;
> +
> +static void obj_cgroup_release(struct percpu_ref *ref)
> +{
> +	struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
> +	unsigned int nr_bytes;
> +	unsigned int nr_pages;
> +	unsigned long flags;
> +
> +	nr_bytes = atomic_read(&objcg->nr_charged_bytes);
> +	WARN_ON_ONCE(nr_bytes & (PAGE_SIZE - 1));
> +	nr_pages = nr_bytes >> PAGE_SHIFT;
> +
> +	if (nr_pages) {
> +		rcu_read_lock();
> +		__memcg_kmem_uncharge(obj_cgroup_memcg(objcg), nr_pages);
> +		rcu_read_unlock();
> +	}
> +
> +	spin_lock_irqsave(&css_set_lock, flags);
> +	list_del(&objcg->list);
> +	mem_cgroup_put(obj_cgroup_memcg(objcg));
> +	spin_unlock_irqrestore(&css_set_lock, flags);

Heh, two obj_cgroup_memcg() lookups with different synchronization
rules.

I know that reparenting could happen in between the page uncharge and
the mem_cgroup_put(), and it would still be safe because the counters
are migrated atomically. But it seems needlessly lockless and complex.

Since you have to css_set_lock anyway, wouldn't it be better to do

	spin_lock_irqsave(&css_set_lock, flags);
	memcg = obj_cgroup_memcg(objcg);
	if (nr_pages)
		__memcg_kmem_uncharge(memcg, nr_pages);
	list_del(&objcg->list);
	mem_cgroup_put(memcg);
	spin_unlock_irqrestore(&css_set_lock, flags);

instead?

> +	percpu_ref_exit(ref);
> +	kfree_rcu(objcg, rcu);
> +}
> +
> +static struct obj_cgroup *obj_cgroup_alloc(void)
> +{
> +	struct obj_cgroup *objcg;
> +	int ret;
> +
> +	objcg = kzalloc(sizeof(struct obj_cgroup), GFP_KERNEL);
> +	if (!objcg)
> +		return NULL;
> +
> +	ret = percpu_ref_init(&objcg->refcnt, obj_cgroup_release, 0,
> +			      GFP_KERNEL);
> +	if (ret) {
> +		kfree(objcg);
> +		return NULL;
> +	}
> +	INIT_LIST_HEAD(&objcg->list);
> +	return objcg;
> +}
> +
> +static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
> +				  struct mem_cgroup *parent)
> +{
> +	struct obj_cgroup *objcg;
> +
> +	objcg = rcu_replace_pointer(memcg->objcg, NULL, true);

Can this actually race with new charges? By the time we are going
offline, where would they be coming from?

What happens if the charger sees a live memcg, but its memcg->objcg is
cleared? Shouldn't they have the same kind of lifetime, where as long
as the memcg can be charged, so can the objcg? What would happen if
you didn't clear memcg->objcg here?

> +	/* Paired with mem_cgroup_put() in objcg_release(). */
> +	css_get(&memcg->css);
> +	percpu_ref_kill(&objcg->refcnt);
> +
> +	spin_lock_irq(&css_set_lock);
> +	list_for_each_entry(objcg, &memcg->objcg_list, list) {
> +		css_get(&parent->css);
> +		xchg(&objcg->memcg, parent);
> +		css_put(&memcg->css);
> +	}

I'm having a pretty hard time following this refcounting.

Why does objcg only acquire a css reference on the way out? It should
hold one when objcg->memcg is set up, and put it when that pointer
goes away.

But also, objcg is already on its own memcg->objcg_list from the
start, so on the first reparenting we get a css ref, then move it to
the parent, then obj_cgroup_release() puts one it doesn't have ...?

Argh, help.

> @@ -2978,6 +3070,120 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
>  	if (PageKmemcg(page))
>  		__ClearPageKmemcg(page);
>  }
> +
> +static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
> +{
> +	struct memcg_stock_pcp *stock;
> +	unsigned long flags;
> +	bool ret = false;
> +
> +	local_irq_save(flags);
> +
> +	stock = this_cpu_ptr(&memcg_stock);
> +	if (objcg == stock->cached_objcg && stock->nr_bytes >= nr_bytes) {
> +		stock->nr_bytes -= nr_bytes;
> +		ret = true;
> +	}
> +
> +	local_irq_restore(flags);
> +
> +	return ret;
> +}
> +
> +static void drain_obj_stock(struct memcg_stock_pcp *stock)
> +{
> +	struct obj_cgroup *old = stock->cached_objcg;
> +
> +	if (!old)
> +		return;
> +
> +	if (stock->nr_bytes) {
> +		unsigned int nr_pages = stock->nr_bytes >> PAGE_SHIFT;
> +		unsigned int nr_bytes = stock->nr_bytes & (PAGE_SIZE - 1);
> +
> +		if (nr_pages) {
> +			rcu_read_lock();
> +			__memcg_kmem_uncharge(obj_cgroup_memcg(old), nr_pages);
> +			rcu_read_unlock();
> +		}
> +
> +		atomic_add(nr_bytes, &old->nr_charged_bytes);
> +		stock->nr_bytes = 0;
> +	}
> +
> +	obj_cgroup_put(old);
> +	stock->cached_objcg = NULL;
> +}
> +
> +static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
> +				     struct mem_cgroup *root_memcg)
> +{
> +	struct mem_cgroup *memcg;
> +
> +	if (stock->cached_objcg) {
> +		memcg = obj_cgroup_memcg(stock->cached_objcg);
> +		if (memcg && mem_cgroup_is_descendant(memcg, root_memcg))
> +			return true;
> +	}
> +
> +	return false;
> +}
> +
> +static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
> +{
> +	struct memcg_stock_pcp *stock;
> +	unsigned long flags;
> +
> +	local_irq_save(flags);
> +
> +	stock = this_cpu_ptr(&memcg_stock);
> +	if (stock->cached_objcg != objcg) { /* reset if necessary */
> +		drain_obj_stock(stock);
> +		obj_cgroup_get(objcg);
> +		stock->cached_objcg = objcg;
> +		stock->nr_bytes = atomic_xchg(&objcg->nr_charged_bytes, 0);
> +	}
> +	stock->nr_bytes += nr_bytes;
> +
> +	if (stock->nr_bytes > PAGE_SIZE)
> +		drain_obj_stock(stock);
> +
> +	local_irq_restore(flags);
> +}
> +
> +int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size)
> +{
> +	struct mem_cgroup *memcg;
> +	unsigned int nr_pages, nr_bytes;
> +	int ret;
> +
> +	if (consume_obj_stock(objcg, size))
> +		return 0;
> +
> +	rcu_read_lock();
> +	memcg = obj_cgroup_memcg(objcg);
> +	css_get(&memcg->css);
> +	rcu_read_unlock();

I don't quite understand the lifetime rules here. You're holding the
rcu lock, so the memcg object cannot get physically freed while you
are looking it up. But you could be racing with an offlining and see
the stale memcg pointer. Isn't css_get() unsafe? Doesn't this need a
retry loop around css_tryget() similar to get_mem_cgroup_from_mm()?
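
A userspace sketch of the tryget semantics in question, i.e. succeed only while the reference count is still positive, so a racing final put cannot be resurrected. This models the idea with C11 atomics and hypothetical names; css_tryget() itself is percpu and more involved:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Model of tryget: increment only if the count is still > 0. A caller
 * that loses the race with the final put gets false back and has to
 * retry, e.g. against the parent, as get_mem_cgroup_from_mm() does. */
static bool refcount_tryget(atomic_int *cnt)
{
	int old = atomic_load(cnt);

	while (old > 0) {
		/* On failure, 'old' is reloaded with the current value. */
		if (atomic_compare_exchange_weak(cnt, &old, old + 1))
			return true;
	}
	return false;
}
```
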

> +
> +	nr_pages = size >> PAGE_SHIFT;
> +	nr_bytes = size & (PAGE_SIZE - 1);
> +
> +	if (nr_bytes)
> +		nr_pages += 1;
> +
> +	ret = __memcg_kmem_charge(memcg, gfp, nr_pages);
> +	if (!ret && nr_bytes)
> +		refill_obj_stock(objcg, PAGE_SIZE - nr_bytes);
> +
> +	css_put(&memcg->css);
> +	return ret;
> +}
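
The rounding in the charge path above splits a request into a page-granular charge plus a leftover credited back to the per-cpu stock. A hypothetical userspace model of just that arithmetic, assuming a 4K page:

```c
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Pages charged against the memcg: the request rounded up. */
static unsigned long charge_pages(size_t size)
{
	unsigned long nr_pages = size >> PAGE_SHIFT;

	if (size & (PAGE_SIZE - 1))
		nr_pages += 1;
	return nr_pages;
}

/* Bytes credited back to the stock: the unused tail of the last
 * page, i.e. PAGE_SIZE - nr_bytes when there is a remainder. */
static unsigned long refill_bytes(size_t size)
{
	unsigned long nr_bytes = size & (PAGE_SIZE - 1);

	return nr_bytes ? PAGE_SIZE - nr_bytes : 0;
}
```

Note the invariant: charge_pages(size) * PAGE_SIZE == size + refill_bytes(size), so no bytes are lost between the page counter and the stock.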
> +
> +void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
> +{
> +	refill_obj_stock(objcg, size);
> +}
> +
>  #endif /* CONFIG_MEMCG_KMEM */
>  
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> @@ -3400,7 +3606,8 @@ static void memcg_flush_percpu_vmevents(struct mem_cgroup *memcg)
>  #ifdef CONFIG_MEMCG_KMEM
>  static int memcg_online_kmem(struct mem_cgroup *memcg)
>  {
> -	int memcg_id;
> +	struct obj_cgroup *objcg;
> +	int memcg_id, ret;
>  
>  	if (cgroup_memory_nokmem)
>  		return 0;
> @@ -3412,6 +3619,15 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
>  	if (memcg_id < 0)
>  		return memcg_id;
>  
> +	objcg = obj_cgroup_alloc();
> +	if (!objcg) {
> +		memcg_free_cache_id(memcg_id);
> +		return -ENOMEM;
> +	}
> +	objcg->memcg = memcg;
> +	rcu_assign_pointer(memcg->objcg, objcg);
> +	list_add(&objcg->list, &memcg->objcg_list);

This self-hosting significantly adds to my confusion. It'd be a lot
easier to understand ownership rules and references if this list_add()
was done directly to the parent's list at the time of reparenting, not
here.

If the objcg holds a css reference, right here is where it should be
acquired. Then transferred in reparent and put during release.

