From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 27 Aug 2020 14:35:23 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Joonsoo Kim
Cc: Vlastimil Babka, Andrew Morton, Linux Memory Management List, LKML,
	Michal Hocko, "Aneesh Kumar K . V", kernel-team@lge.com, Joonsoo Kim
Subject: Re: [PATCH for v5.9] mm/page_alloc: handle a missing case for memalloc_nocma_{save/restore} APIs
Message-ID: <20200827133523.GC3090@techsingularity.net>
References: <1598331582-19923-1-git-send-email-iamjoonsoo.kim@lge.com>

On Wed, Aug 26, 2020 at 02:12:44PM +0900, Joonsoo Kim wrote:
> > > And, it requires to break current code
> > > layering that order-0 page is always handled by the pcplist. I'd prefer
> > > to avoid it so this patch uses a different way to skip CMA page allocation
> > > from the pcplist.
> >
> > Well it would be much simpler and won't affect most of allocations. Better than
> > flushing pcplists IMHO.
>
> Hmm... Still, I'd prefer my approach.
I prefer the pcp bypass approach. It's simpler and it does not incur a
pcp drain/refill penalty.

> There are two reasons. First, the layering problem
> mentioned above. In rmqueue(), there is code for MIGRATE_HIGHATOMIC.
> As the name shows, it's for high-order atomic allocation. But, after
> skipping pcplist allocation as you suggested, we could get there with
> an order-0 request.

I guess your concern is that under some circumstances a request that
passes a watermark check could fail due to a highatomic reserve, and to
an extent this is true. However, in that case the system is already low
on memory and, depending on the allocation context, the pcp lists may
get flushed anyway.

> We can also change this code, but I'd hope to maintain the current
> layering. Second, a performance reason. After the flag for nocma is
> up, a burst of nocma allocations could come. After flushing the
> pcplist once, we can use the free pages on the pcplist as usual until
> the context is changed.

It's not guaranteed, because CMA pages could be freed between the nocma
save and restore, triggering further drains due to a reschedule.
Similarly, a CMA allocation in parallel could refill the per-cpu list
with CMA pages. While both cases are unlikely, it's more unpredictable
than a straightforward pcp bypass.

I don't really see the bypass as a layering violation of the API. The
fact that order-0 is serviced from the pcp list is an internal
implementation detail; the API doesn't care.

-- 
Mel Gorman
SUSE Labs