From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
	Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
	Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH v2 09/12] mm/migrate: make standard migration target allocation functions
Date: Wed, 27 May 2020 15:45:00 +0900
Message-Id: <1590561903-13186-10-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1590561903-13186-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1590561903-13186-1-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailer: git-send-email 2.7.4

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

There are several similar functions for migration target allocation. Since
there is no fundamental difference between them, it's better to keep just
one rather than keeping all the variants. This patch implements the base
migration target allocation function, alloc_migration_target(); the
following patches convert the variants to use it.

Note that the PageHighMem() call in the previous function is replaced by an
open-coded is_highmem_idx() check, which reads more clearly.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/migrate.h |  6 +++---
 mm/memory-failure.c     |  3 ++-
 mm/memory_hotplug.c     |  3 ++-
 mm/migrate.c            | 24 +++++++++++++-----------
 mm/page_isolation.c     |  3 ++-
 5 files changed, 22 insertions(+), 17 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 923c4f3..abf09b3 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -40,8 +40,8 @@ extern int migrate_page(struct address_space *mapping,
 			enum migrate_mode mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
 		struct alloc_control *ac, enum migrate_mode mode, int reason);
-extern struct page *new_page_nodemask(struct page *page,
-				struct alloc_control *ac);
+extern struct page *alloc_migration_target(struct page *page,
+				struct alloc_control *ac);
 extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);
 
@@ -60,7 +60,7 @@ static inline int migrate_pages(struct list_head *l, new_page_t new,
 		free_page_t free, struct alloc_control *ac,
 		enum migrate_mode mode, int reason)
 	{ return -ENOSYS; }
-static inline struct page *new_page_nodemask(struct page *page,
+static inline struct page *alloc_migration_target(struct page *page,
 				struct alloc_control *ac)
 	{ return NULL; }
 static inline int isolate_movable_page(struct page *page, isolate_mode_t mode)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 0d5d59b..a75de67 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1639,9 +1639,10 @@ static struct page *new_page(struct page *p, struct alloc_control *__ac)
 	struct alloc_control ac = {
 		.nid = page_to_nid(p),
 		.nmask = &node_states[N_MEMORY],
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
 	};
 
-	return new_page_nodemask(p, &ac);
+	return alloc_migration_target(p, &ac);
 }
 
 /*
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 89642f9..185f4c9 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1249,7 +1249,8 @@ static struct page *new_node_page(struct page *page, struct alloc_control *__ac)
 	ac.nid = nid;
 	ac.nmask = &nmask;
-	return new_page_nodemask(page, &ac);
+	ac.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
+	return alloc_migration_target(page, &ac);
 }
 
 static int
diff --git a/mm/migrate.c b/mm/migrate.c
index 9d6ed94..780135a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1537,31 +1537,33 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	return rc;
 }
 
-struct page *new_page_nodemask(struct page *page, struct alloc_control *ac)
+struct page *alloc_migration_target(struct page *page, struct alloc_control *ac)
 {
-	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
 	unsigned int order = 0;
 	struct page *new_page = NULL;
+	int zidx;
 
+	/* hugetlb has its own gfp handling logic */
 	if (PageHuge(page)) {
 		struct hstate *h = page_hstate(compound_head(page));
-		struct alloc_control __ac = {
-			.nid = ac->nid,
-			.nmask = ac->nmask,
-		};
 
-		return alloc_huge_page_nodemask(h, &__ac);
+		return alloc_huge_page_nodemask(h, ac);
 	}
 
+	ac->__gfp_mask = ac->gfp_mask;
 	if (PageTransHuge(page)) {
-		gfp_mask |= GFP_TRANSHUGE;
+		ac->__gfp_mask |= GFP_TRANSHUGE;
 		order = HPAGE_PMD_ORDER;
 	}
+	zidx = zone_idx(page_zone(page));
+	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
+		ac->__gfp_mask |= __GFP_HIGHMEM;
 
-	if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
-		gfp_mask |= __GFP_HIGHMEM;
+	if (ac->skip_cma)
+		ac->__gfp_mask &= ~__GFP_MOVABLE;
 
-	new_page = __alloc_pages_nodemask(gfp_mask, order, ac->nid, ac->nmask);
+	new_page = __alloc_pages_nodemask(ac->__gfp_mask, order,
+					ac->nid, ac->nmask);
 
 	if (new_page && PageTransHuge(new_page))
 		prep_transhuge_page(new_page);
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 1e1828b..aba799d 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -303,7 +303,8 @@ struct page *alloc_migrate_target(struct page *page, struct alloc_control *__ac)
 	struct alloc_control ac = {
 		.nid = page_to_nid(page),
 		.nmask = &node_states[N_MEMORY],
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
 	};
 
-	return new_page_nodemask(page, &ac);
+	return alloc_migration_target(page, &ac);
 }
-- 
2.7.4