From: Shijie Luo
Subject: [PATCH] mm: fix some typos and code style problems
Date: Mon, 19 Apr 2021 04:22:37 -0400
Message-ID: <20210419082237.61206-1-luoshijie1@huawei.com>

fix some typos and code style problems in mm.
gfp.h: s/MAXNODES/MAX_NUMNODES
mmzone.h: s/then/than
rmap.c: s/__vma_split()/__vma_adjust()
swap.c: s/__mod_zone_page_stat/__mod_zone_page_state, s/is is/is
swap_state.c: s/whoes/whose
z3fold.c: code style problem fix in z3fold_unregister_migration
zsmalloc.c: s/of/or, s/give/given

Signed-off-by: Shijie Luo
Signed-off-by: Miaohe Lin
---
 include/linux/gfp.h    | 2 +-
 include/linux/mmzone.h | 2 +-
 mm/rmap.c              | 2 +-
 mm/swap.c              | 4 ++--
 mm/swap_state.c        | 2 +-
 mm/z3fold.c            | 2 +-
 mm/zsmalloc.c          | 4 ++--
 7 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 8572a1474e16..5f597df8da98 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -490,7 +490,7 @@ static inline int gfp_zonelist(gfp_t flags)
 
 /*
  * We get the zone list from the current node and the gfp_mask.
- * This zone list contains a maximum of MAXNODES*MAX_NR_ZONES zones.
+ * This zone list contains a maximum of MAX_NUMNODES*MAX_NR_ZONES zones.
  * There are two zonelists per node, one for all zones with memory and
  * one containing just zones from the node the zonelist belongs to.
  *
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 47946cec7584..5fd14fd85d4c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -55,7 +55,7 @@ enum migratetype {
 	 * pageblocks to MIGRATE_CMA which can be done by
 	 * __free_pageblock_cma() function. What is important though
 	 * is that a range of pageblocks must be aligned to
-	 * MAX_ORDER_NR_PAGES should biggest page be bigger then
+	 * MAX_ORDER_NR_PAGES should biggest page be bigger than
 	 * a single pageblock.
 	 */
 	MIGRATE_CMA,
diff --git a/mm/rmap.c b/mm/rmap.c
index b0fc27e77d6d..693a610e181d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -257,7 +257,7 @@ static inline void unlock_anon_vma_root(struct anon_vma *root)
  * Attach the anon_vmas from src to dst.
  * Returns 0 on success, -ENOMEM on failure.
  *
- * anon_vma_clone() is called by __vma_split(), __split_vma(), copy_vma() and
+ * anon_vma_clone() is called by __vma_adjust(), __split_vma(), copy_vma() and
  * anon_vma_fork(). The first three want an exact copy of src, while the last
  * one, anon_vma_fork(), may try to reuse an existing anon_vma to prevent
  * endless growth of anon_vma. Since dst->anon_vma is set to NULL before call,
diff --git a/mm/swap.c b/mm/swap.c
index 31b844d4ed94..9e0028b01b97 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -483,7 +483,7 @@ void lru_cache_add_inactive_or_unevictable(struct page *page,
 	if (unlikely(unevictable) && !TestSetPageMlocked(page)) {
 		int nr_pages = thp_nr_pages(page);
 		/*
-		 * We use the irq-unsafe __mod_zone_page_stat because this
+		 * We use the irq-unsafe __mod_zone_page_state because this
 		 * counter is not modified from interrupt context, and the pte
 		 * lock is held(spinlock), which implies preemption disabled.
 		 */
@@ -794,7 +794,7 @@ void lru_add_drain_all(void)
 	 * below which drains the page vectors.
 	 *
 	 * Let x, y, and z represent some system CPU numbers, where x < y < z.
-	 * Assume CPU #z is is in the middle of the for_each_online_cpu loop
+	 * Assume CPU #z is in the middle of the for_each_online_cpu loop
 	 * below and has already reached CPU #y's per-cpu data. CPU #x comes
 	 * along, adds some pages to its per-cpu vectors, then calls
 	 * lru_add_drain_all().
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 3cdee7b11da9..5d1fafffee4e 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -797,7 +797,7 @@ static void swap_ra_info(struct vm_fault *vmf,
  *
  * Returns the struct page for entry and addr, after queueing swapin.
  *
- * Primitive swap readahead code. We simply read in a few pages whoes
+ * Primitive swap readahead code. We simply read in a few pages whose
  * virtual addresses are around the fault address in the same vma.
  *
  * Caller must hold read mmap_lock if vmf->vma is not NULL.
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 9d889ad2bb86..7fe7adaaad01 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -391,7 +391,7 @@ static void z3fold_unregister_migration(struct z3fold_pool *pool)
 {
 	if (pool->inode)
 		iput(pool->inode);
- }
+}
 
 /* Initializes the z3fold header of a newly allocated z3fold page */
 static struct z3fold_header *init_z3fold_page(struct page *page, bool headless,
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 30c358b72025..412e0f95e2c1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -61,7 +61,7 @@
 #define ZSPAGE_MAGIC	0x58
 
 /*
- * This must be power of 2 and greater than of equal to sizeof(link_free).
+ * This must be power of 2 and greater than or equal to sizeof(link_free).
  * These two conditions ensure that any 'struct link_free' itself doesn't
  * span more than 1 page which avoids complex case of mapping 2 pages simply
  * to restore link_free pointer values.
  */
@@ -530,7 +530,7 @@ static void set_zspage_mapping(struct zspage *zspage,
  * class maintains a list of zspages where each zspage is divided
  * into equal sized chunks. Each allocation falls into one of these
  * classes depending on its size. This function returns index of the
- * size class which has chunk size big enough to hold the give size.
+ * size class which has chunk size big enough to hold the given size.
  */
 static int get_size_class_index(int size)
 {
-- 
2.19.1