From: Zhaoyang Huang
Date: Tue, 17 Oct 2023 10:32:59 +0800
Subject: Re: [PATCHv6 1/1] mm: optimization on page allocation when CMA enabled
To: Andrew Morton
Cc: "zhaoyang.huang", Johannes Weiner, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org, steve.kang@unisoc.com
In-Reply-To: <20231016153959.c218e1ae876426b9193eb294@linux-foundation.org>
References: <20231016071245.2865233-1-zhaoyang.huang@unisoc.com> <20231016153959.c218e1ae876426b9193eb294@linux-foundation.org>
On Tue, Oct 17, 2023 at 6:40 AM Andrew Morton wrote:
>
> On Mon, 16 Oct 2023 15:12:45 +0800 "zhaoyang.huang" wrote:
>
> > From: Zhaoyang Huang
> >
> > According to the current CMA utilization policy, an alloc_pages(GFP_USER)
> > call can 'steal' UNMOVABLE & RECLAIMABLE page blocks with the help of
> > CMA (it passes zone_watermark_ok by counting CMA pages in, but uses U&R
> > blocks in rmqueue), which can lead to a following alloc_pages(GFP_KERNEL)
> > failing. Solve this by introducing a second watermark check for
> > GFP_MOVABLE, which lets the allocation use CMA when appropriate.
> >
> > -- Free_pages(30MB)
> >  |
> >  |
> > -- WMARK_LOW(25MB)
> >  |
> > -- Free_CMA(12MB)
> >  |
> >  |
> > --
> >
> > Signed-off-by: Zhaoyang Huang
> > ---
> > v6: update comments
>
> The patch itself is identical to the v5 patch. So either you meant
> "update changelog" above or you sent the wrong diff?
Sorry, that should be "update changelog".

> Also, have we resolved any concerns regarding possible performance
> impacts of this change?
I don't think this commit introduces a performance impact, as it only
adds one more path for using CMA page blocks earlier than before.

__rmqueue(struct zone *zone, unsigned int order, int migratetype,
	  unsigned int alloc_flags)
{
	if (IS_ENABLED(CONFIG_CMA)) {
		if (alloc_flags & ALLOC_CMA &&
-		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		    use_cma_first(zone, order, alloc_flags)) {
			// the current '1/2' logic is kept, while a path is added
			// for using CMA earlier than before
			page = __rmqueue_cma_fallback(zone, order);
			if (page)
				return page;
		}
	}
retry:
	// the normal __rmqueue_smallest() path is not affected; it can be
	// deemed a fallback path for __rmqueue_cma_fallback() failure
	page = __rmqueue_smallest(zone, order, migratetype);
	if (unlikely(!page)) {
		if (alloc_flags & ALLOC_CMA)
			page = __rmqueue_cma_fallback(zone, order);

		if (!page && __rmqueue_fallback(zone, order, migratetype,
						alloc_flags))
			goto retry;
	}
	return page;
}
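For reference, below is a minimal sketch of what the use_cma_first() helper
referenced in the hunk above could look like, reconstructed only from the
changelog description (the existing '1/2' heuristic plus a WMARK_LOW check
on the non-CMA free pages, per the Free_pages/WMARK_LOW/Free_CMA diagram).
The helper body in the actual patch may differ; order and alloc_flags are
kept only to match the call site and are unused in this simplified sketch.

static bool use_cma_first(struct zone *zone, unsigned int order,
			  unsigned int alloc_flags)
{
	unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
	unsigned long cma_free = zone_page_state(zone, NR_FREE_CMA_PAGES);

	if (!cma_free)
		return false;

	/* existing behaviour: prefer CMA once it holds half of the free pages */
	if (cma_free > free / 2)
		return true;

	/*
	 * assumed new path: when the non-CMA free pages alone would drop
	 * below WMARK_LOW, serve this movable allocation from CMA so that
	 * UNMOVABLE & RECLAIMABLE blocks are preserved for later
	 * GFP_KERNEL requests.
	 */
	return free - cma_free <= low_wmark_pages(zone);
}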