Date: Tue, 12 Oct 2021 20:24:19 +0200
From: Michal Hocko <mhocko@suse.com>
To: Shakeel Butt
Cc: Vasily Averin, Johannes Weiner, Vladimir Davydov, Andrew Morton,
    Mel Gorman, Roman Gushchin, Uladzislau Rezki, Vlastimil Babka,
    Cgroups, Linux MM, LKML, kernel@openvz.org
Subject: Re: [PATCH mm v3] memcg: enable memory accounting in __alloc_pages_bulk
References: <0baa2b26-a41b-acab-b75d-72ec241f5151@virtuozzo.com>
 <60df0efd-f458-a13c-7c89-749bdab21d1d@virtuozzo.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Tue 12-10-21 09:08:38, Shakeel Butt wrote:
> On Tue, Oct 12, 2021 at 8:36 AM Michal Hocko wrote:
> >
> > On Tue 12-10-21 17:58:21, Vasily Averin wrote:
> > > Enable memory accounting for
bulk page allocator.
> >
> > ENOCHANGELOG
> >
> > And I have to say I am not very happy about the solution. It adds
> > very tricky code where it splits different charging steps apart.
> >
> > Would it be just too inefficient to charge page-by-page once all pages
> > are already taken away from the pcp lists? This bulk should be small so
> > this shouldn't really cause massive problems. I mean something like
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index b37435c274cf..8bcd69195ef5 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -5308,6 +5308,10 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
> >
> >  	local_unlock_irqrestore(&pagesets.lock, flags);
> >
> > +	if (memcg_kmem_enabled() && (gfp & __GFP_ACCOUNT)) {
> > +		/* charge pages here */
> > +	}
>
> It is not that simple because __alloc_pages_bulk only allocates pages
> for empty slots in the page_array provided by the caller.
>
> The failure handling for post charging would be more complicated.

If it is really that complicated (I haven't tried) then it would be much
simpler to skip the bulk allocator completely for __GFP_ACCOUNT rather
than add tricky code. The bulk allocator is meant for ultra hot paths,
and memcg charging along with reclaim doesn't really fit into that model
anyway. Or are there any actual users who really need the bulk allocator
optimization and also need memcg accounting?
-- 
Michal Hocko
SUSE Labs
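[The two concerns raised above, that __alloc_pages_bulk only fills the empty slots of a caller-provided page_array, and that failure handling for post-allocation charging is the tricky part, can be sketched as a small userspace simulation. This is only an illustration under stated assumptions, not kernel code: page_t, alloc_page_stub, charge_page_stub, and free_page_stub are hypothetical stand-ins, and the partial-success policy (keep already-charged pages, roll back only uncharged new ones) is one possible choice, not the thread's conclusion.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace stand-ins for kernel primitives; all names are hypothetical. */
typedef struct { int charged; } page_t;

static int charge_budget;       /* pages the fake "memcg" will still accept */

static page_t *alloc_page_stub(void)
{
	static page_t pool[64];
	static size_t next;
	return next < 64 ? &pool[next++] : NULL;
}

static bool charge_page_stub(page_t *p)
{
	if (charge_budget <= 0)
		return false;
	charge_budget--;
	p->charged = 1;
	return true;
}

static void free_page_stub(page_t *p)
{
	(void)p;                /* the static pool is never really recycled */
}

/*
 * Fill the empty slots of the array, then charge afterwards.  Because the
 * caller may pass a partially filled array, the rollback on a charge
 * failure must touch only the slots this call filled, which is why a
 * per-slot "we_filled" record is needed.
 */
static int bulk_alloc_then_charge(page_t **array, bool *we_filled, int nr)
{
	int i, got = 0;

	/* Pass 1: allocate into empty slots only. */
	for (i = 0; i < nr; i++) {
		we_filled[i] = false;
		if (array[i])
			continue;               /* caller-provided page: skip */
		array[i] = alloc_page_stub();
		if (!array[i])
			continue;
		we_filled[i] = true;
		got++;
	}

	/* Pass 2: charge only the pages this call produced. */
	for (i = 0; i < nr; i++) {
		if (!we_filled[i])
			continue;
		if (!charge_page_stub(array[i])) {
			/* Free and clear every still-uncharged new page. */
			for (int j = i; j < nr; j++) {
				if (we_filled[j] && !array[j]->charged) {
					free_page_stub(array[j]);
					array[j] = NULL;
					got--;
				}
			}
			break;
		}
	}
	return got;     /* newly added pages that were also charged */
}
```

With a 4-slot array whose second slot is pre-filled by the caller and a charge budget of 2, pass 1 fills three slots but pass 2 can charge only two, so the third new page is freed and its slot cleared while the caller's page is untouched. The bookkeeping this requires is the extra complexity the post-charging approach carries compared with simply falling back to the regular allocator for __GFP_ACCOUNT.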