linux-mm.kvack.org archive mirror
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: LKML <linux-kernel@vger.kernel.org>,
	"balbir@linux.vnet.ibm.com" <balbir@linux.vnet.ibm.com>,
	"yamamoto@valinux.co.jp" <yamamoto@valinux.co.jp>,
	"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>,
	ryov@valinux.co.jp, "linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [RFC][PATCH -mm 0/7] memcg: lockless page_cgroup v1
Date: Thu, 21 Aug 2008 17:34:42 +0900	[thread overview]
Message-ID: <20080821173442.b9234f26.kamezawa.hiroyu@jp.fujitsu.com> (raw)
In-Reply-To: <20080820200006.a152c14c.kamezawa.hiroyu@jp.fujitsu.com>

On Wed, 20 Aug 2008 20:00:06 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> > Known problem: force_empty is broken... so rmdir will get stuck in a nightmare.
> > It's because of patch 2/7.
> > This will be fixed in the next version.
> > 
> 
This is a new routine for force_empty; it assumes init_mem_cgroup has no limit.
(The lockless page_cgroup patches are also applied.)

I think this routine is generic enough to be extended for hierarchy support in the future.
I also think the move_account() routine can be used for other purposes
(for example, move_task).


==
int mem_cgroup_move_account(struct page *page, struct page_cgroup *pc,
        struct mem_cgroup *from, struct mem_cgroup *to)
{
        struct mem_cgroup_per_zone *from_mz, *to_mz;
        int nid, zid;
        int ret = 1;

        VM_BUG_ON(to->no_limit == 0);
        VM_BUG_ON(!irqs_disabled());

        nid = page_to_nid(page);
        zid = page_zonenum(page);
        from_mz =  mem_cgroup_zoneinfo(from, nid, zid);
        to_mz =  mem_cgroup_zoneinfo(to, nid, zid);

        if (res_counter_charge(&to->res, PAGE_SIZE)) {
                /* The destination is assumed to have no_limit, so this should not fail. */
                return ret;
        }

        if (spin_trylock(&to_mz->lru_lock)) {
                __mem_cgroup_remove_list(from_mz, pc);
                css_put(&from->css);
                res_counter_uncharge(&from->res, PAGE_SIZE);
                pc->mem_cgroup = to;
                css_get(&to->css);
                __mem_cgroup_add_list(to_mz, pc);
                ret = 0;
                spin_unlock(&to_mz->lru_lock);
        } else {
                res_counter_uncharge(&to->res, PAGE_SIZE);
        }

        return ret;
}
/*
 * This routine moves all accounting to the root cgroup.
 */
static void mem_cgroup_force_empty_list(struct mem_cgroup *mem,
                            struct mem_cgroup_per_zone *mz,
                            enum lru_list lru)
{
        struct page_cgroup *pc;
        unsigned long flags;
        struct list_head *list;
        int drain = 0;

        list = &mz->lists[lru];

        spin_lock_irqsave(&mz->lru_lock, flags);
        while (!list_empty(list)) {
                pc = list_entry(list->prev, struct page_cgroup, lru);
                if (PcgObsolete(pc)) {
                        list_move(&pc->lru, list);
                        /*
                         * This page_cgroup may remain on this list
                         * until we drain it.
                         */
                        if (drain++ > MEMCG_LRU_THRESH/2) {
                                spin_unlock_irqrestore(&mz->lru_lock, flags);
                                mem_cgroup_all_force_drain();
                                yield();
                                drain = 0;
                                spin_lock_irqsave(&mz->lru_lock, flags);
                        }
                        continue;
                }
                if (mem_cgroup_move_account(pc->page, pc,
                                                mem, &init_mem_cgroup)) {
                        /* contention on the destination's lru_lock; retry */
                        list_move(&pc->lru, list);
                        spin_unlock_irqrestore(&mz->lru_lock, flags);
                        yield();
                        spin_lock_irqsave(&mz->lru_lock, flags);
                }
                if (atomic_read(&mem->css.cgroup->count) > 0)
                        break;
        }
        spin_unlock_irqrestore(&mz->lru_lock, flags);
}
==
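
For reference, here is a minimal sketch of how the helper above could be driven
for a whole memcg: walk every node, every zone and every LRU list and push the
charges to init_mem_cgroup. The per-node/zone/LRU walk mirrors what force_empty
already does, but the function name mem_cgroup_force_empty_all() and the exact
iterator macros used here are illustrative assumptions, not part of this patch.

==
/*
 * Sketch only: move every charge of 'mem' to the root (init_mem_cgroup).
 * mem_cgroup_force_empty_all() is a made-up name; for_each_node_state()
 * and for_each_lru() are assumed to be available in -mm.
 */
static void mem_cgroup_force_empty_all(struct mem_cgroup *mem)
{
        int node, zid;
        enum lru_list lru;

        for_each_node_state(node, N_POSSIBLE) {
                for (zid = 0; zid < MAX_NR_ZONES; zid++) {
                        struct mem_cgroup_per_zone *mz;

                        mz = mem_cgroup_zoneinfo(mem, node, zid);
                        for_each_lru(lru)
                                mem_cgroup_force_empty_list(mem, mz, lru);
                }
        }
}
==

After such a loop, force_empty would only have to wait for mem->res usage to
drop to zero; the drain/yield path inside mem_cgroup_force_empty_list() already
handles page_cgroups that are obsolete but not yet freed.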

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .


Thread overview: 18+ messages
     [not found] <20080819173014.17358c17.kamezawa.hiroyu@jp.fujitsu.com>
2008-08-20  9:53 ` KAMEZAWA Hiroyuki
2008-08-20  9:55   ` [RFC][PATCH -mm 1/7] memcg: page_cgroup_atomic_flags.patch KAMEZAWA Hiroyuki
2008-08-20  9:59   ` [RFC][PATCH -mm 2/7] memcg: delayed_batch_freeing_of_page_cgroup.patch KAMEZAWA Hiroyuki
2008-08-20 10:03   ` [RFC][PATCH -mm 3/7] memcg: freeing page_cgroup by rcu.patch KAMEZAWA Hiroyuki
2008-08-20 10:04   ` [RFC][PATCH -mm 4/7] memcg: lockless page_cgroup KAMEZAWA Hiroyuki
2008-08-20 10:05   ` [RFC][PATCH -mm 5/7] memcg: prefetch mem cgroup per zone KAMEZAWA Hiroyuki
2008-08-20 10:07   ` [RFC][PATCH -mm 6/7] memcg: make-mapping-null-before-calling-uncharge.patch KAMEZAWA Hiroyuki
2008-08-22  4:57     ` Daisuke Nishimura
2008-08-22  5:48       ` KAMEZAWA Hiroyuki
2008-08-20 10:08   ` [RFC][PATCH -mm 7/7] memcg: add page_cgroup.h header file KAMEZAWA Hiroyuki
2008-08-20 10:41   ` [RFC][PATCH -mm 0/7] memcg: lockless page_cgroup v1 KAMEZAWA Hiroyuki
2008-08-20 11:00     ` KAMEZAWA Hiroyuki
2008-08-21  2:17       ` KAMEZAWA Hiroyuki
2008-08-21  3:36         ` Balbir Singh
2008-08-21  3:58           ` KAMEZAWA Hiroyuki
2008-08-21  3:54         ` Daisuke Nishimura
2008-08-21  8:34       ` KAMEZAWA Hiroyuki [this message]
2008-08-20 11:33   ` Hirokazu Takahashi
