From: kernel test robot <lkp@intel.com>
To: Nhat Pham <nphamcs@gmail.com>, linux-mm@kvack.org
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev,
	akpm@linux-foundation.org, hannes@cmpxchg.org, hughd@google.com,
	yosry.ahmed@linux.dev, mhocko@kernel.org,
	roman.gushchin@linux.dev, shakeel.butt@linux.dev,
	muchun.song@linux.dev, len.brown@intel.com,
	chengming.zhou@linux.dev, kasong@tencent.com, chrisl@kernel.org,
	huang.ying.caritas@gmail.com, ryan.roberts@arm.com,
	shikemeng@huaweicloud.com, viro@zeniv.linux.org.uk,
	baohua@kernel.org, bhe@redhat.com, osalvador@suse.de,
	lorenzo.stoakes@oracle.com, christophe.leroy@csgroup.eu,
	pavel@kernel.org, kernel-team@meta.com,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-pm@vger.kernel.org, peterx@redhat.com, riel@surriel.com,
	joshua.hahnjy@gmail.com
Subject: Re: [PATCH v3 18/20] memcg: swap: only charge physical swap slots
Date: Mon, 9 Feb 2026 10:12:02 +0800
Message-ID: <202602091006.0jXoavPW-lkp@intel.com>
In-Reply-To: <20260208215839.87595-19-nphamcs@gmail.com>

Hi Nhat,

kernel test robot noticed the following build errors:

[auto build test ERROR on linus/master]
[also build test ERROR on v6.19]
[cannot apply to akpm-mm/mm-everything tj-cgroup/for-next tip/smp/core next-20260205]
[If your patch is applied to the wrong git tree, kindly drop us a note.
When submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Nhat-Pham/swap-rearrange-the-swap-header-file/20260209-065842
base:   linus/master
patch link:    https://lore.kernel.org/r/20260208215839.87595-19-nphamcs%40gmail.com
patch subject: [PATCH v3 18/20] memcg: swap: only charge physical swap slots
config: sparc64-defconfig (https://download.01.org/0day-ci/archive/20260209/202602091006.0jXoavPW-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260209/202602091006.0jXoavPW-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602091006.0jXoavPW-lkp@intel.com/

All errors (new ones prefixed by >>):

>> mm/vswap.c:637:2: error: call to undeclared function 'mem_cgroup_clear_swap'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
     637 |         mem_cgroup_clear_swap(entry, 1);
         |         ^
   mm/vswap.c:637:2: note: did you mean 'mem_cgroup_uncharge_swap'?
   include/linux/swap.h:658:20: note: 'mem_cgroup_uncharge_swap' declared here
     658 | static inline void mem_cgroup_uncharge_swap(swp_entry_t entry,
         |                    ^
>> mm/vswap.c:661:2: error: call to undeclared function 'mem_cgroup_record_swap'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
     661 |         mem_cgroup_record_swap(folio, entry);
         |         ^
   mm/vswap.c:661:2: note: did you mean 'mem_cgroup_uncharge_swap'?
   include/linux/swap.h:658:20: note: 'mem_cgroup_uncharge_swap' declared here
     658 | static inline void mem_cgroup_uncharge_swap(swp_entry_t entry,
         |                    ^
   2 errors generated.


vim +/mem_cgroup_clear_swap +637 mm/vswap.c

   528	
   529	/*
   530	 * Caller needs to handle races with other operations themselves.
   531	 *
   532	 * Specifically, this function is safe to be called in contexts where the swap
   533	 * entry has been added to the swap cache and the associated folio is locked.
   534	 * We cannot race with other accessors, and the swap entry is guaranteed to be
   535	 * valid the whole time (since swap cache implies one refcount).
   536	 *
   537	 * We cannot assume that the backends will be of the same type,
   538	 * contiguous, etc. We might have a large folio coalesced from subpages with
   539	 * mixed backend, which is only rectified when it is reclaimed.
   540	 */
   541	 static void release_backing(swp_entry_t entry, int nr)
   542	{
   543		struct vswap_cluster *cluster = NULL;
   544		struct swp_desc *desc;
   545		unsigned long flush_nr, phys_swap_start = 0, phys_swap_end = 0;
   546		unsigned long phys_swap_released = 0;
   547		unsigned int phys_swap_type = 0;
   548		bool need_flushing_phys_swap = false;
   549		swp_slot_t flush_slot;
   550		int i;
   551	
   552		VM_WARN_ON(!entry.val);
   553	
   554		rcu_read_lock();
   555		for (i = 0; i < nr; i++) {
   556			desc = vswap_iter(&cluster, entry.val + i);
   557			VM_WARN_ON(!desc);
   558	
   559			/*
   560			 * We batch contiguous physical swap slots for more efficient
   561			 * freeing.
   562			 */
   563			if (phys_swap_start != phys_swap_end &&
   564					(desc->type != VSWAP_SWAPFILE ||
   565						swp_slot_type(desc->slot) != phys_swap_type ||
   566						swp_slot_offset(desc->slot) != phys_swap_end)) {
   567				need_flushing_phys_swap = true;
   568				flush_slot = swp_slot(phys_swap_type, phys_swap_start);
   569				flush_nr = phys_swap_end - phys_swap_start;
   570				phys_swap_start = phys_swap_end = 0;
   571			}
   572	
   573			if (desc->type == VSWAP_ZSWAP && desc->zswap_entry) {
   574				zswap_entry_free(desc->zswap_entry);
   575			} else if (desc->type == VSWAP_SWAPFILE) {
   576				phys_swap_released++;
   577				if (!phys_swap_start) {
   578					/* start a new contiguous range of phys swap */
   579					phys_swap_start = swp_slot_offset(desc->slot);
   580					phys_swap_end = phys_swap_start + 1;
   581					phys_swap_type = swp_slot_type(desc->slot);
   582				} else {
   583					/* extend the current contiguous range of phys swap */
   584					phys_swap_end++;
   585				}
   586			}
   587	
   588			desc->slot.val = 0;
   589	
   590			if (need_flushing_phys_swap) {
   591				spin_unlock(&cluster->lock);
   592				cluster = NULL;
   593				swap_slot_free_nr(flush_slot, flush_nr);
   594				need_flushing_phys_swap = false;
   595			}
   596		}
   597		if (cluster)
   598			spin_unlock(&cluster->lock);
   599		rcu_read_unlock();
   600	
   601		/* Flush any remaining physical swap range */
   602		if (phys_swap_start) {
   603			flush_slot = swp_slot(phys_swap_type, phys_swap_start);
   604			flush_nr = phys_swap_end - phys_swap_start;
   605			swap_slot_free_nr(flush_slot, flush_nr);
   606		}
   607	
   608		if (phys_swap_released)
   609			mem_cgroup_uncharge_swap(entry, phys_swap_released);
   610	 }
   611	
   612	/*
   613	 * Entered with the cluster locked, but might unlock the cluster.
   614	 * This is because several operations, such as releasing physical swap slots
   615	 * (i.e swap_slot_free_nr()) require the cluster to be unlocked to avoid
   616	 * deadlocks.
   617	 *
   618	 * This is safe, because:
   619	 *
   620	 * 1. The swap entry to be freed has refcnt (swap count and swapcache pin)
   621	 *    down to 0, so no one can change its internal state
   622	 *
   623	 * 2. The swap entry to be freed still holds a refcnt to the cluster, keeping
   624	 *    the cluster itself valid.
   625	 *
   626	 * We will exit the function with the cluster re-locked.
   627	 */
   628	static void vswap_free(struct vswap_cluster *cluster, struct swp_desc *desc,
   629		swp_entry_t entry)
   630	{
   631		/* Clear shadow if present */
   632		if (xa_is_value(desc->shadow))
   633			desc->shadow = NULL;
   634		spin_unlock(&cluster->lock);
   635	
   636		release_backing(entry, 1);
 > 637		mem_cgroup_clear_swap(entry, 1);
   638	
   639		/* erase forward mapping and release the virtual slot for reallocation */
   640		spin_lock(&cluster->lock);
   641		release_vswap_slot(cluster, entry.val);
   642	}
   643	
   644	/**
   645	 * folio_alloc_swap - allocate virtual swap space for a folio.
   646	 * @folio: the folio.
   647	 *
   648	 * Return: 0, if the allocation succeeded, -ENOMEM, if the allocation failed.
   649	 */
   650	int folio_alloc_swap(struct folio *folio)
   651	{
   652		swp_entry_t entry;
   653	
   654		VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
   655		VM_BUG_ON_FOLIO(!folio_test_uptodate(folio), folio);
   656	
   657		entry = vswap_alloc(folio);
   658		if (!entry.val)
   659			return -ENOMEM;
   660	
 > 661		mem_cgroup_record_swap(folio, entry);
   662		swap_cache_add_folio(folio, entry, NULL);
   663	
   664		return 0;
   665	}
   666	
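
The two calls flagged above (mm/vswap.c:637 and :661) are to helpers that are
apparently only declared when memory-cgroup support is enabled, which
sparc64's defconfig does not select. Below is a minimal sketch of the usual
fix pattern for such errors, assuming the real declarations sit next to
mem_cgroup_uncharge_swap() in include/linux/swap.h; the guard and the
signatures are inferred from the call sites quoted above and may not match
the actual patch:

	/*
	 * Hedged sketch only: the guard and the signatures are inferred from
	 * the call sites in mm/vswap.c, not taken from the patch itself. The
	 * exact condition may also involve CONFIG_SWAP.
	 */
	#ifdef CONFIG_MEMCG
	void mem_cgroup_record_swap(struct folio *folio, swp_entry_t entry);
	void mem_cgroup_clear_swap(swp_entry_t entry, unsigned int nr_pages);
	#else
	/* No-op stubs so the callers still compile when memcg is disabled. */
	static inline void mem_cgroup_record_swap(struct folio *folio,
						  swp_entry_t entry)
	{
	}

	static inline void mem_cgroup_clear_swap(swp_entry_t entry,
						 unsigned int nr_pages)
	{
	}
	#endif

With stubs like these, configs without memcg compile the unconditional call
sites in vswap_free() and folio_alloc_swap() down to nothing, following the
same pattern as the static inline mem_cgroup_uncharge_swap() declaration at
include/linux/swap.h:658 quoted in the error output above.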

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


