From: Souptick Joarder
Date: Sat, 28 Nov 2020 00:59:58 +0530
Subject: Re: [PATCH] mm: fix some spelling mistakes in comments
To: Haitao Shi
Cc: Andrew Morton, rppt@kernel.org, Linux-MM, linux-kernel@vger.kernel.org, wangle6@huawei.com
In-Reply-To: <20201127011747.86005-1-shihaitao1@huawei.com>
References: <20201127011747.86005-1-shihaitao1@huawei.com>

On Fri, Nov 27, 2020 at 6:50 AM Haitao Shi wrote:
>
> Fix some spelling mistakes in comments:
> udpate ==> update
> succesful ==> successful
> exmaple ==> example
> unneccessary ==> unnecessary
> stoping ==> stopping
> uknown ==> unknown
>
> Signed-off-by: Haitao Shi

Reviewed-by: Souptick Joarder

> ---
>  mm/filemap.c     | 2 +-
>  mm/huge_memory.c | 2 +-
>  mm/khugepaged.c  | 2 +-
>  mm/memblock.c    | 2 +-
>  mm/migrate.c     | 2 +-
>  mm/page_ext.c    | 2 +-
>  6 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 3ebbe64..8826c48 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1359,7 +1359,7 @@ static int __wait_on_page_locked_async(struct page *page,
>  	else
>  		ret = PageLocked(page);
>  	/*
> -	 * If we were succesful now, we know we're still on the
> +	 * If we were successful now, we know we're still on the
>  	 * waitqueue as we're still under the lock. This means it's
>  	 * safe to remove and return success, we know the callback
>  	 * isn't going to trigger.
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index ec2bb93..0fea0c2 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2356,7 +2356,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
>  	 * Clone page flags before unfreezing refcount.
>  	 *
>  	 * After successful get_page_unless_zero() might follow flags change,
> -	 * for exmaple lock_page() which set PG_waiters.
> +	 * for example lock_page() which set PG_waiters.
>  	 */
>  	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
>  	page_tail->flags |= (head->flags &
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 4e3dff1..d6f7ede 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1273,7 +1273,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  			 * PTEs are armed with uffd write protection.
>  			 * Here we can also mark the new huge pmd as
>  			 * write protected if any of the small ones is
> -			 * marked but that could bring uknown
> +			 * marked but that could bring unknown
>  			 * userfault messages that falls outside of
>  			 * the registered range. So, just be simple.
>  			 */
> diff --git a/mm/memblock.c b/mm/memblock.c
> index b68ee86..086662a 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -871,7 +871,7 @@ int __init_memblock memblock_physmem_add(phys_addr_t base, phys_addr_t size)
>   * @base: base address of the region
>   * @size: size of the region
>   * @set: set or clear the flag
> - * @flag: the flag to udpate
> + * @flag: the flag to update
>   *
>   * This function isolates region [@base, @base + @size), and sets/clears flag
>   *
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 5795cb8..8a3580c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2548,7 +2548,7 @@ static bool migrate_vma_check_page(struct page *page)
>  		 * will bump the page reference count. Sadly there is no way to
>  		 * differentiate a regular pin from migration wait. Hence to
>  		 * avoid 2 racing thread trying to migrate back to CPU to enter
> -		 * infinite loop (one stoping migration because the other is
> +		 * infinite loop (one stopping migration because the other is
>  		 * waiting on pte migration entry). We always return true here.
>  		 *
>  		 * FIXME proper solution is to rework migration_entry_wait() so
> diff --git a/mm/page_ext.c b/mm/page_ext.c
> index a3616f7..cf931eb 100644
> --- a/mm/page_ext.c
> +++ b/mm/page_ext.c
> @@ -34,7 +34,7 @@
>   *
>   * The need callback is used to decide whether extended memory allocation is
>   * needed or not. Sometimes users want to deactivate some features in this
> - * boot and extra memory would be unneccessary. In this case, to avoid
> + * boot and extra memory would be unnecessary. In this case, to avoid
>   * allocating huge chunk of memory, each clients represent their need of
>   * extra memory through the need callback. If one of the need callbacks
>   * returns true, it means that someone needs extra memory so that
> --
> 2.10.1
>
>