From: Michal Hocko <mhocko@suse.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
cgroups@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC] memcg: Convert mc_target.page to mc_target.folio
Date: Thu, 17 Mar 2022 12:58:46 +0100
Message-ID: <YjMidqgZbieEyBuF@dhcp22.suse.cz>
In-Reply-To: <YjJJIrENYb1qFHzl@casper.infradead.org>
On Wed 16-03-22 20:31:30, Matthew Wilcox wrote:
> This is a fairly mechanical change to convert mc_target.page to
> mc_target.folio. This is a prerequisite for converting
> find_get_incore_page() to find_get_incore_folio(). But I'm not
> convinced it's right, and I'm not convinced the existing code is
> quite right either.
>
> In particular, the code in hunk @@ -6036,28 +6041,26 @@ needs
> careful review. There are also assumptions in here that a memory
> allocation is never larger than a PMD, which is true today, but I've
> been asked about larger allocations.
Could you be more specific about those use cases? Are they really
interested in supporting larger pages for the memcg charge migration,
which is a v1-only feature? Or are you merely interested in making the
code more generic?
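Just to make sure we are talking about the same thing, my mental model
of the conversion is roughly the following (written from memory of
mm/memcontrol.c rather than taken from your patch, so treat the exact
layout as an assumption):

	/* the walk now hands back the folio rather than a raw page */
	union mc_target {
		struct folio	*folio;		/* was: struct page *page */
		swp_entry_t	ent;
	};

	enum mc_target_type {
		MC_TARGET_NONE = 0,
		MC_TARGET_FOLIO,		/* was: MC_TARGET_PAGE */
		MC_TARGET_SWAP,
		MC_TARGET_DEVICE,
	};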
[...]
> @@ -6036,28 +6041,26 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
> case MC_TARGET_DEVICE:
> device = true;
> fallthrough;
> - case MC_TARGET_PAGE:
> - page = target.page;
> + case MC_TARGET_FOLIO:
> + folio = target.folio;
> /*
> - * We can have a part of the split pmd here. Moving it
> - * can be done but it would be too convoluted so simply
> - * ignore such a partial THP and keep it in original
> - * memcg. There should be somebody mapping the head.
> + * Is bailing out here with a large folio still the
> + * right thing to do? Unclear.
> */
> - if (PageTransCompound(page))
> + if (folio_test_large(folio))
> goto put;
> - if (!device && isolate_lru_page(page))
> + if (!device && folio_isolate_lru(folio))
> goto put;
> - if (!mem_cgroup_move_account(page, false,
> + if (!mem_cgroup_move_account(folio, false,
> mc.from, mc.to)) {
> mc.precharge--;
> /* we uncharge from mc.from later. */
> mc.moved_charge++;
> }
> if (!device)
> - putback_lru_page(page);
> -put: /* get_mctgt_type() gets the page */
> - put_page(page);
> + folio_putback_lru(folio);
> +put: /* get_mctgt_type() gets the folio */
> + folio_put(folio);
> break;
> case MC_TARGET_SWAP:
> ent = target.ent;
It's been some time since I've looked at this particular code, but my
recollection and current understanding is that we skip over pte-mapped
huge pages for simplicity, so that we do not have to recharge all the
other ptes from the same huge page. What kind of concern do you see
there?
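To spell out the bail-out you are asking about, this is how I read it
(a sketch of my understanding, largely restating the comment your patch
removes, not code from the patch itself):

	/*
	 * After a pmd split the THP is still a single large folio, now
	 * mapped by individual ptes.  Moving it from one pte visit could
	 * be done but it would be too convoluted -- every other pte of
	 * the same huge page would have to be found and recharged as
	 * well -- so the partial THP is simply kept in the original
	 * memcg and whoever maps the head deals with it.
	 */
	if (folio_test_large(folio))
		goto put;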
--
Michal Hocko
SUSE Labs