* [PATCH] mm: fix minor spelling mistakes in comments
From: klourencodev @ 2025-12-18 15:09 UTC
To: linux-mm; +Cc: akpm, david, Kevin Lourenco
From: Kevin Lourenco <klourencodev@gmail.com>
Correct several typos in comments across files in mm/
Signed-off-by: Kevin Lourenco <klourencodev@gmail.com>
---
mm/internal.h | 2 +-
mm/madvise.c | 2 +-
mm/memblock.c | 4 ++--
mm/memcontrol.c | 2 +-
mm/memory-failure.c | 2 +-
mm/memory-tiers.c | 2 +-
mm/memory.c | 4 ++--
mm/memory_hotplug.c | 4 ++--
mm/migrate_device.c | 4 ++--
mm/mm_init.c | 6 +++---
mm/mremap.c | 6 +++---
mm/mseal.c | 4 ++--
mm/numa_memblks.c | 2 +-
mm/page_alloc.c | 4 ++--
mm/page_io.c | 4 ++--
mm/page_isolation.c | 2 +-
mm/page_reporting.c | 2 +-
mm/swap.c | 2 +-
mm/swap.h | 2 +-
mm/swap_state.c | 2 +-
mm/swapfile.c | 2 +-
mm/userfaultfd.c | 4 ++--
mm/vma.c | 4 ++--
mm/vma.h | 8 ++++----
mm/vmscan.c | 2 +-
mm/vmstat.c | 2 +-
mm/zsmalloc.c | 2 +-
27 files changed, 43 insertions(+), 43 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index e430da900430..db4e97489f66 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -171,7 +171,7 @@ static inline int mmap_file(struct file *file, struct vm_area_struct *vma)
/*
* OK, we tried to call the file hook for mmap(), but an error
- * arose. The mapping is in an inconsistent state and we most not invoke
+ * arose. The mapping is in an inconsistent state and we must not invoke
* any further hooks on it.
*/
vma->vm_ops = &vma_dummy_vm_ops;
diff --git a/mm/madvise.c b/mm/madvise.c
index 6bf7009fa5ce..863d55b8a658 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1867,7 +1867,7 @@ static bool is_valid_madvise(unsigned long start, size_t len_in, int behavior)
* madvise_should_skip() - Return if the request is invalid or nothing.
* @start: Start address of madvise-requested address range.
* @len_in: Length of madvise-requested address range.
- * @behavior: Requested madvise behavor.
+ * @behavior: Requested madvise behavior.
* @err: Pointer to store an error code from the check.
*
* If the specified behaviour is invalid or nothing would occur, we skip the
diff --git a/mm/memblock.c b/mm/memblock.c
index 905d06b16348..e76255e4ff36 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -773,7 +773,7 @@ bool __init_memblock memblock_validate_numa_coverage(unsigned long threshold_byt
unsigned long start_pfn, end_pfn, mem_size_mb;
int nid, i;
- /* calculate lose page */
+ /* calculate lost page */
for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
if (!numa_valid_node(nid))
nr_pages += end_pfn - start_pfn;
@@ -2414,7 +2414,7 @@ EXPORT_SYMBOL_GPL(reserve_mem_find_by_name);
/**
* reserve_mem_release_by_name - Release reserved memory region with a given name
- * @name: The name that is attatched to a reserved memory region
+ * @name: The name that is attached to a reserved memory region
*
* Forcibly release the pages in the reserved memory region so that those memory
* can be used as free memory. After released the reserved region size becomes 0.
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a01d3e6c157d..75fc22a33b28 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4976,7 +4976,7 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
memcg = folio_memcg(old);
/*
* Note that it is normal to see !memcg for a hugetlb folio.
- * For e.g, itt could have been allocated when memory_hugetlb_accounting
+ * For e.g, it could have been allocated when memory_hugetlb_accounting
* was not selected.
*/
VM_WARN_ON_ONCE_FOLIO(!folio_test_hugetlb(old) && !memcg, old);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 8565cf979091..5a88985e29b7 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -864,7 +864,7 @@ static int kill_accessing_process(struct task_struct *p, unsigned long pfn,
*
* MF_RECOVERED - The m-f() handler marks the page as PG_hwpoisoned'ed.
* The page has been completely isolated, that is, unmapped, taken out of
- * the buddy system, or hole-punnched out of the file mapping.
+ * the buddy system, or hole-punched out of the file mapping.
*/
static const char *action_name[] = {
[MF_IGNORED] = "Ignored",
diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
index 864811fff409..20aab9c19c5e 100644
--- a/mm/memory-tiers.c
+++ b/mm/memory-tiers.c
@@ -648,7 +648,7 @@ void clear_node_memory_type(int node, struct memory_dev_type *memtype)
if (node_memory_types[node].memtype == memtype || !memtype)
node_memory_types[node].map_count--;
/*
- * If we umapped all the attached devices to this node,
+ * If we unmapped all the attached devices to this node,
* clear the node memory type.
*/
if (!node_memory_types[node].map_count) {
diff --git a/mm/memory.c b/mm/memory.c
index d1cd2d9e1656..c8e67504bae4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5932,7 +5932,7 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
else
*last_cpupid = folio_last_cpupid(folio);
- /* Record the current PID acceesing VMA */
+ /* Record the current PID accessing VMA */
vma_set_access_pid_bit(vma);
count_vm_numa_event(NUMA_HINT_FAULTS);
@@ -6251,7 +6251,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
* Use the maywrite version to indicate that vmf->pte may be
* modified, but since we will use pte_same() to detect the
* change of the !pte_none() entry, there is no need to recheck
- * the pmdval. Here we chooes to pass a dummy variable instead
+ * the pmdval. Here we choose to pass a dummy variable instead
* of NULL, which helps new user think about why this place is
* special.
*/
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a63ec679d861..389989a28abe 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -926,7 +926,7 @@ static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn
*
* MOVABLE : KERNEL_EARLY
*
- * Whereby KERNEL_EARLY is memory in one of the kernel zones, available sinze
+ * Whereby KERNEL_EARLY is memory in one of the kernel zones, available since
* boot. We base our calculation on KERNEL_EARLY internally, because:
*
* a) Hotplugged memory in one of the kernel zones can sometimes still get
@@ -1258,7 +1258,7 @@ static pg_data_t *hotadd_init_pgdat(int nid)
* NODE_DATA is preallocated (free_area_init) but its internal
* state is not allocated completely. Add missing pieces.
* Completely offline nodes stay around and they just need
- * reintialization.
+ * reinitialization.
*/
pgdat = NODE_DATA(nid);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 0346c2d7819f..0a8b31939640 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -1419,10 +1419,10 @@ EXPORT_SYMBOL(migrate_device_range);
/**
* migrate_device_pfns() - migrate device private pfns to normal memory.
- * @src_pfns: pre-popluated array of source device private pfns to migrate.
+ * @src_pfns: pre-populated array of source device private pfns to migrate.
* @npages: number of pages to migrate.
*
- * Similar to migrate_device_range() but supports non-contiguous pre-popluated
+ * Similar to migrate_device_range() but supports non-contiguous pre-populated
* array of device pages to migrate.
*/
int migrate_device_pfns(unsigned long *src_pfns, unsigned long npages)
diff --git a/mm/mm_init.c b/mm/mm_init.c
index d86248566a56..0927bedb1254 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -187,7 +187,7 @@ void mm_compute_batch(int overcommit_policy)
/*
* For policy OVERCOMMIT_NEVER, set batch size to 0.4% of
* (total memory/#cpus), and lift it to 25% for other policies
- * to easy the possible lock contention for percpu_counter
+ * to ease the possible lock contention for percpu_counter
* vm_committed_as, while the max limit is INT_MAX
*/
if (overcommit_policy == OVERCOMMIT_NEVER)
@@ -1745,7 +1745,7 @@ static void __init free_area_init_node(int nid)
lru_gen_init_pgdat(pgdat);
}
-/* Any regular or high memory on that node ? */
+/* Any regular or high memory on that node? */
static void __init check_for_memory(pg_data_t *pgdat)
{
enum zone_type zone_type;
@@ -2045,7 +2045,7 @@ static unsigned long __init deferred_init_pages(struct zone *zone,
* Initialize and free pages.
*
* At this point reserved pages and struct pages that correspond to holes in
- * memblock.memory are already intialized so every free range has a valid
+ * memblock.memory are already initialized so every free range has a valid
* memory map around it.
* This ensures that access of pages that are ahead of the range being
* initialized (computing buddy page in __free_one_page()) always reads a valid
diff --git a/mm/mremap.c b/mm/mremap.c
index 8275b9772ec1..8391ae17de64 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -678,7 +678,7 @@ static bool can_realign_addr(struct pagetable_move_control *pmc,
/*
* We don't want to have to go hunting for VMAs from the end of the old
* VMA to the next page table boundary, also we want to make sure the
- * operation is wortwhile.
+ * operation is worthwhile.
*
* So ensure that we only perform this realignment if the end of the
* range being copied reaches or crosses the page table boundary.
@@ -926,7 +926,7 @@ static bool vrm_overlaps(struct vma_remap_struct *vrm)
/*
* Will a new address definitely be assigned? This either if the user specifies
* it via MREMAP_FIXED, or if MREMAP_DONTUNMAP is used, indicating we will
- * always detemrine a target address.
+ * always determine a target address.
*/
static bool vrm_implies_new_addr(struct vma_remap_struct *vrm)
{
@@ -1806,7 +1806,7 @@ static unsigned long check_mremap_params(struct vma_remap_struct *vrm)
/*
* move_vma() need us to stay 4 maps below the threshold, otherwise
* it will bail out at the very beginning.
- * That is a problem if we have already unmaped the regions here
+ * That is a problem if we have already unmapped the regions here
* (new_addr, and old_addr), because userspace will not know the
* state of the vma's after it gets -ENOMEM.
* So, to avoid such scenario we can pre-compute if the whole
diff --git a/mm/mseal.c b/mm/mseal.c
index ae442683c5c0..316b5e1dec78 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -21,7 +21,7 @@
* It disallows unmapped regions from start to end whether they exist at the
* start, in the middle, or at the end of the range, or any combination thereof.
*
- * This is because after sealng a range, there's nothing to stop memory mapping
+ * This is because after sealing a range, there's nothing to stop memory mapping
* of ranges in the remaining gaps later, meaning that the user might then
* wrongly consider the entirety of the mseal()'d range to be sealed when it
* in fact isn't.
@@ -124,7 +124,7 @@ static int mseal_apply(struct mm_struct *mm,
* -EINVAL:
* invalid input flags.
* start address is not page aligned.
- * Address arange (start + len) overflow.
+ * Address range (start + len) overflow.
* -ENOMEM:
* addr is not a valid address (not allocated).
* end (start + len) is not a valid address.
diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
index 5b009a9cd8b4..7779506fd29e 100644
--- a/mm/numa_memblks.c
+++ b/mm/numa_memblks.c
@@ -465,7 +465,7 @@ int __init numa_memblks_init(int (*init_func)(void),
* We reset memblock back to the top-down direction
* here because if we configured ACPI_NUMA, we have
* parsed SRAT in init_func(). It is ok to have the
- * reset here even if we did't configure ACPI_NUMA
+ * reset here even if we didn't configure ACPI_NUMA
* or acpi numa init fails and fallbacks to dummy
* numa init.
*/
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7ab35cef3cae..8a7d3a118c5e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1829,7 +1829,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
/*
* As memory initialization might be integrated into KASAN,
- * KASAN unpoisoning and memory initializion code must be
+ * KASAN unpoisoning and memory initialization code must be
* kept together to avoid discrepancies in behavior.
*/
@@ -7629,7 +7629,7 @@ struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned
* unsafe in NMI. If spin_trylock() is called from hard IRQ the current
* task may be waiting for one rt_spin_lock, but rt_spin_trylock() will
* mark the task as the owner of another rt_spin_lock which will
- * confuse PI logic, so return immediately if called form hard IRQ or
+ * confuse PI logic, so return immediately if called from hard IRQ or
* NMI.
*
* Note, irqs_disabled() case is ok. This function can be called
diff --git a/mm/page_io.c b/mm/page_io.c
index 3c342db77ce3..a2c034660c80 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -450,14 +450,14 @@ void __swap_writepage(struct folio *folio, struct swap_iocb **swap_plug)
VM_BUG_ON_FOLIO(!folio_test_swapcache(folio), folio);
/*
- * ->flags can be updated non-atomicially (scan_swap_map_slots),
+ * ->flags can be updated non-atomically (scan_swap_map_slots),
* but that will never affect SWP_FS_OPS, so the data_race
* is safe.
*/
if (data_race(sis->flags & SWP_FS_OPS))
swap_writepage_fs(folio, swap_plug);
/*
- * ->flags can be updated non-atomicially (scan_swap_map_slots),
+ * ->flags can be updated non-atomically (scan_swap_map_slots),
* but that will never affect SWP_SYNCHRONOUS_IO, so the data_race
* is safe.
*/
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index f72b6cd38b95..b5924eff4f8b 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -301,7 +301,7 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
* pageblock. When not all pageblocks within a page are isolated at the same
* time, free page accounting can go wrong. For example, in the case of
* MAX_PAGE_ORDER = pageblock_order + 1, a MAX_PAGE_ORDER page has two
- * pagelbocks.
+ * pageblocks.
* [ MAX_PAGE_ORDER ]
* [ pageblock0 | pageblock1 ]
* When either pageblock is isolated, if it is a free page, the page is not
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index e4c428e61d8c..8a03effda749 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -123,7 +123,7 @@ page_reporting_drain(struct page_reporting_dev_info *prdev,
continue;
/*
- * If page was not comingled with another page we can
+ * If page was not commingled with another page we can
* consider the result to be "reported" since the page
* hasn't been modified, otherwise we will need to
* report on the new larger page when we make our way
diff --git a/mm/swap.c b/mm/swap.c
index 2260dcd2775e..bb19ccbece46 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -513,7 +513,7 @@ void folio_add_lru(struct folio *folio)
EXPORT_SYMBOL(folio_add_lru);
/**
- * folio_add_lru_vma() - Add a folio to the appropate LRU list for this VMA.
+ * folio_add_lru_vma() - Add a folio to the appropriate LRU list for this VMA.
* @folio: The folio to be added to the LRU.
* @vma: VMA in which the folio is mapped.
*
diff --git a/mm/swap.h b/mm/swap.h
index d034c13d8dd2..3dcf198b05e3 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -236,7 +236,7 @@ static inline bool folio_matches_swap_entry(const struct folio *folio,
/*
* All swap cache helpers below require the caller to ensure the swap entries
- * used are valid and stablize the device by any of the following ways:
+ * used are valid and stabilize the device by any of the following ways:
* - Hold a reference by get_swap_device(): this ensures a single entry is
* valid and increases the swap device's refcount.
* - Locking a folio in the swap cache: this ensures the folio's swap entries
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 5f97c6ae70a2..c6f661436c9a 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -82,7 +82,7 @@ void show_swap_cache_info(void)
* Context: Caller must ensure @entry is valid and protect the swap device
* with reference count or locks.
* Return: Returns the found folio on success, NULL otherwise. The caller
- * must lock nd check if the folio still matches the swap entry before
+ * must lock and check if the folio still matches the swap entry before
* use (e.g., folio_matches_swap_entry).
*/
struct folio *swap_cache_get_folio(swp_entry_t entry)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 46d2008e4b99..76273ad26739 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2018,7 +2018,7 @@ swp_entry_t get_swap_page_of_type(int type)
if (get_swap_device_info(si)) {
if (si->flags & SWP_WRITEOK) {
/*
- * Grab the local lock to be complaint
+ * Grab the local lock to be compliant
* with swap table allocation.
*/
local_lock(&percpu_swap_cluster.lock);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index b11f81095fa5..d270d5377630 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -1274,7 +1274,7 @@ static long move_pages_ptes(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd
* Use the maywrite version to indicate that dst_pte will be modified,
* since dst_pte needs to be none, the subsequent pte_same() check
* cannot prevent the dst_pte page from being freed concurrently, so we
- * also need to abtain dst_pmdval and recheck pmd_same() later.
+ * also need to obtain dst_pmdval and recheck pmd_same() later.
*/
dst_pte = pte_offset_map_rw_nolock(mm, dst_pmd, dst_addr, &dst_pmdval,
&dst_ptl);
@@ -1330,7 +1330,7 @@ static long move_pages_ptes(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd
goto out;
}
- /* If PTE changed after we locked the folio them start over */
+ /* If PTE changed after we locked the folio then start over */
if (src_folio && unlikely(!pte_same(src_folio_pte, orig_src_pte))) {
ret = -EAGAIN;
goto out;
diff --git a/mm/vma.c b/mm/vma.c
index fc90befd162f..bf62ac1c52ad 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -2909,8 +2909,8 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info)
/*
* Adjust for the gap first so it doesn't interfere with the
* later alignment. The first step is the minimum needed to
- * fulill the start gap, the next steps is the minimum to align
- * that. It is the minimum needed to fulill both.
+ * fulfill the start gap, the next steps is the minimum to align
+ * that. It is the minimum needed to fulfill both.
*/
gap = vma_iter_addr(&vmi) + info->start_gap;
gap += (info->align_offset - gap) & info->align_mask;
diff --git a/mm/vma.h b/mm/vma.h
index abada6a64c4e..de817dc695b6 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -264,7 +264,7 @@ void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
struct vm_area_struct *prev, struct vm_area_struct *next);
/**
- * vma_modify_flags() - Peform any necessary split/merge in preparation for
+ * vma_modify_flags() - Perform any necessary split/merge in preparation for
* setting VMA flags to *@vm_flags in the range @start to @end contained within
* @vma.
* @vmi: Valid VMA iterator positioned at @vma.
@@ -292,7 +292,7 @@ __must_check struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi,
vm_flags_t *vm_flags_ptr);
/**
- * vma_modify_name() - Peform any necessary split/merge in preparation for
+ * vma_modify_name() - Perform any necessary split/merge in preparation for
* setting anonymous VMA name to @new_name in the range @start to @end contained
* within @vma.
* @vmi: Valid VMA iterator positioned at @vma.
@@ -316,7 +316,7 @@ __must_check struct vm_area_struct *vma_modify_name(struct vma_iterator *vmi,
struct anon_vma_name *new_name);
/**
- * vma_modify_policy() - Peform any necessary split/merge in preparation for
+ * vma_modify_policy() - Perform any necessary split/merge in preparation for
* setting NUMA policy to @new_pol in the range @start to @end contained
* within @vma.
* @vmi: Valid VMA iterator positioned at @vma.
@@ -340,7 +340,7 @@ __must_check struct vm_area_struct *vma_modify_policy(struct vma_iterator *vmi,
struct mempolicy *new_pol);
/**
- * vma_modify_flags_uffd() - Peform any necessary split/merge in preparation for
+ * vma_modify_flags_uffd() - Perform any necessary split/merge in preparation for
* setting VMA flags to @vm_flags and UFFD context to @new_ctx in the range
* @start to @end contained within @vma.
* @vmi: Valid VMA iterator positioned at @vma.
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 77018534a7c9..8bdb1629b6eb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1063,7 +1063,7 @@ static bool may_enter_fs(struct folio *folio, gfp_t gfp_mask)
/*
* We can "enter_fs" for swap-cache with only __GFP_IO
* providing this isn't SWP_FS_OPS.
- * ->flags can be updated non-atomicially (scan_swap_map_slots),
+ * ->flags can be updated non-atomically (scan_swap_map_slots),
* but that will never affect SWP_FS_OPS, so the data_race
* is safe.
*/
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 65de88cdf40e..bd2af431ff86 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1626,7 +1626,7 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
}
}
-/* Print out the free pages at each order for each migatetype */
+/* Print out the free pages at each order for each migratetype */
static void pagetypeinfo_showfree(struct seq_file *m, void *arg)
{
int order;
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5bf832f9c05c..84da164dcbc5 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -105,7 +105,7 @@
/*
* On systems with 4K page size, this gives 255 size classes! There is a
- * trader-off here:
+ * trade-off here:
* - Large number of size classes is potentially wasteful as free page are
* spread across these classes
* - Small number of size classes causes large internal fragmentation
--
2.47.3
* Re: [PATCH] mm: fix minor spelling mistakes in comments
From: SeongJae Park @ 2025-12-20 2:27 UTC
To: klourencodev; +Cc: SeongJae Park, linux-mm, akpm, david
On Thu, 18 Dec 2025 16:09:06 +0100 klourencodev@gmail.com wrote:
> From: Kevin Lourenco <klourencodev@gmail.com>
>
> Correct several typos in comments across files in mm/
>
> Signed-off-by: Kevin Lourenco <klourencodev@gmail.com>
> ---
[...]
> diff --git a/mm/vma.c b/mm/vma.c
> index fc90befd162f..bf62ac1c52ad 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -2909,8 +2909,8 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info)
> /*
> * Adjust for the gap first so it doesn't interfere with the
> * later alignment. The first step is the minimum needed to
> - * fulill the start gap, the next steps is the minimum to align
> - * that. It is the minimum needed to fulill both.
> + * fulfill the start gap, the next steps is the minimum to align
> + * that. It is the minimum needed to fulfill both.
Nit. s/next steps is/next steps are/ ?
I'm not sure if that is really grammatically correct. I'm just asking. Even
if I'm correct, I wouldn't argue you have to fix it right now. So please feel
free to ignore my comment.
Reviewed-by: SeongJae Park <sj@kernel.org>
Thanks,
SJ
[...]
* Re: [PATCH] mm: fix minor spelling mistakes in comments
From: Kevin Lourenco @ 2025-12-20 3:49 UTC
To: SeongJae Park; +Cc: linux-mm, akpm, david
Hi SJ,
I think you’re right — the wording there is probably not correct. If
anything, I would be more inclined to use “the next step is”
(singular), since there is only one alignment step following the start
gap adjustment.
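For anyone following along, here is a tiny stand-alone sketch of the
two-step computation that comment describes (plain C with made-up
values, not the kernel code itself; only the two gap expressions are
taken from the mm/vma.c hunk above):

#include <stdio.h>

int main(void)
{
	/* Illustrative values only; not taken from the kernel. */
	unsigned long addr = 0x100400;    /* what vma_iter_addr() might return */
	unsigned long start_gap = 0x200;  /* info->start_gap */
	unsigned long align_mask = 0xfff; /* align to a 4 KiB boundary */
	unsigned long align_offset = 0;   /* info->align_offset */

	/* First step: the minimum needed to fulfill the start gap. */
	unsigned long gap = addr + start_gap;     /* 0x100600 */

	/* Second (and only other) step: the minimum bump to align that. */
	gap += (align_offset - gap) & align_mask; /* -> 0x101000 */

	printf("gap = %#lx\n", gap);
	return 0;
}

With those values the first step lands at 0x100600 and the alignment
step bumps it up to 0x101000, i.e. the minimum needed to fulfill both
constraints.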
That said, I intentionally limited this patch to clear typos. While
going through mm/, I did notice a number of grammatical issues in
comments, but I chose to ignore them to keep the scope focused, because
some typos actually made me wonder whether they were kernel-specific
terminology or just misspellings (e.g. “sinze boot”).
So I’d prefer to keep this patch dedicated to obvious typos only, and
leave grammar cleanups for future patches.
Thanks a lot for the review!
Best regards,
Kevin
On Sat, 20 Dec 2025 at 03:27, SeongJae Park <sj@kernel.org> wrote:
>
> On Thu, 18 Dec 2025 16:09:06 +0100 klourencodev@gmail.com wrote:
>
> > From: Kevin Lourenco <klourencodev@gmail.com>
> >
> > Correct several typos in comments across files in mm/
> >
> > Signed-off-by: Kevin Lourenco <klourencodev@gmail.com>
> > ---
> [...]
> > diff --git a/mm/vma.c b/mm/vma.c
> > index fc90befd162f..bf62ac1c52ad 100644
> > --- a/mm/vma.c
> > +++ b/mm/vma.c
> > @@ -2909,8 +2909,8 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info)
> > /*
> > * Adjust for the gap first so it doesn't interfere with the
> > * later alignment. The first step is the minimum needed to
> > - * fulill the start gap, the next steps is the minimum to align
> > - * that. It is the minimum needed to fulill both.
> > + * fulfill the start gap, the next steps is the minimum to align
> > + * that. It is the minimum needed to fulfill both.
>
> Nit. s/next steps is/next steps are/ ?
>
> I'm not sure if that is really grammatically correct. I'm just asking. Even
> if I'm correct, I wouldn't argue you have to fix it right now. So please feel
> free to ignore my comment.
>
> Reviewed-by: SeongJae Park <sj@kernel.org>
>
>
> Thanks,
> SJ
>
> [...]
* Re: [PATCH] mm: fix minor spelling mistakes in comments
From: SeongJae Park @ 2025-12-20 3:56 UTC
To: Kevin Lourenco; +Cc: SeongJae Park, linux-mm, akpm, david
On Sat, 20 Dec 2025 04:49:27 +0100 Kevin Lourenco <klourencodev@gmail.com> wrote:
> Hi SJ,
>
> I think you’re right — the wording there is probably not correct. If
> anything, I would be more inclined to use “the next step is”
> (singular), since there is only one alignment step following the start
> gap adjustment.
>
> That said, I intentionally limited this patch to clear typos. While
> going through mm/, I did notice a number of grammatical issues in
> comments, but I chose to ignore them to keep the scope focused cause
> some typos actually made me wonder whether they were kernel-specific
> terminology or just misspellings (e.g. “sinze boot”).
>
> So I’d prefer to keep this patch dedicated to obvious typos only, and
> leave grammar cleanups for future patches.
That makes sense to me :)
>
> Thanks a lot for the review!
My pleasure!
Thanks,
SJ
[...]
* Re: [PATCH] mm: fix minor spelling mistakes in comments
From: David Hildenbrand (Red Hat) @ 2025-12-21 11:17 UTC
To: klourencodev, linux-mm; +Cc: akpm
On 12/18/25 16:09, klourencodev@gmail.com wrote:
> From: Kevin Lourenco <klourencodev@gmail.com>
>
> Correct several typos in comments across files in mm/
>
> Signed-off-by: Kevin Lourenco <klourencodev@gmail.com>
> ---
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
--
Cheers
David