From: Barry Song <21cnbao@gmail.com>
To: ryan.roberts@arm.com
Cc: akpm@linux-foundation.org, david@redhat.com, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, mhocko@suse.com, shy828301@gmail.com,
 wangkefeng.wang@huawei.com, willy@infradead.org, xiang@kernel.org,
 ying.huang@intel.com, yuzhao@google.com, chrisl@kernel.org,
 surenb@google.com, hanchuanhua@oppo.com
Subject: Re: [PATCH v3 4/4] mm: swap: Swap-out small-sized THP without splitting
Date: Mon, 5 Feb 2024 22:51:55 +1300
Message-Id: <20240205095155.7151-1-v-songbaohua@oppo.com>
In-Reply-To: <20231025144546.577640-5-ryan.roberts@arm.com>
References: <20231025144546.577640-5-ryan.roberts@arm.com>
+Chris, Suren and Chuanhua

Hi Ryan,

> +	/*
> +	 * __scan_swap_map_try_ssd_cluster() may drop si->lock during discard,
> +	 * so indicate that we are scanning to synchronise with swapoff.
> +	 */
> +	si->flags += SWP_SCANNING;
> +	ret = __scan_swap_map_try_ssd_cluster(si, &offset, &scan_base, order);
> +	si->flags -= SWP_SCANNING;

Nobody is using this scan_base afterwards, so it seems a bit weird to
pass a pointer.

> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1212,11 +1212,13 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  			if (!can_split_folio(folio, NULL))
>  				goto activate_locked;
>  			/*
> -			 * Split folios without a PMD map right
> -			 * away. Chances are some or all of the
> -			 * tail pages can be freed without IO.
> +			 * Split PMD-mappable folios without a
> +			 * PMD map right away. Chances are some
> +			 * or all of the tail pages can be freed
> +			 * without IO.
>  			 */
> -			if (!folio_entire_mapcount(folio) &&
> +			if (folio_test_pmd_mappable(folio) &&
> +			    !folio_entire_mapcount(folio) &&
>  			    split_folio_to_list(folio,
>  						folio_list))
>  				goto activate_locked;

Chuanhua and I ran this patchset for a couple of days and found a race
between reclamation and split_folio: it can cause applications to read
wrong (zero) data while swapping in.

Suppose one thread (T1) is reclaiming a large folio by some means while
another thread (T2) is calling madvise MADV_PAGEOUT on it, and at the
same time two further threads, T3 and T4, are swapping the same page in
in parallel. T1 doesn't split the folio, but T2 does, as below:

static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
				unsigned long addr, unsigned long end,
				struct mm_walk *walk)
{
	/*
	 * Creating a THP page is expensive so split it only if we
	 * are sure it's worth. Split it if we are only owner.
	 */
	if (folio_test_large(folio)) {
		int err;

		if (folio_estimated_sharers(folio) != 1)
			break;
		if (pageout_anon_only_filter && !folio_test_anon(folio))
			break;
		if (!folio_trylock(folio))
			break;
		folio_get(folio);
		arch_leave_lazy_mmu_mode();
		pte_unmap_unlock(start_pte, ptl);
		start_pte = NULL;
		err = split_folio(folio);
		folio_unlock(folio);
		folio_put(folio);
		if (err)
			break;
		start_pte = pte =
			pte_offset_map_lock(mm, pmd, addr, &ptl);
		if (!start_pte)
			break;
		arch_enter_lazy_mmu_mode();
		pte--;
		addr -= PAGE_SIZE;
		continue;
	}
	return 0;
}

If T3 and T4 swap in the same page, they both do swap_read_folio().
Whichever of T3 and T4 gets the PTL first will set the PTE; the second
one will then find via pte_same() that the PTE has been changed by the
other thread, and goto out_nomap in do_swap_page():

vm_fault_t do_swap_page(struct vm_fault *vmf)
{
	if (!folio) {
		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
		    __swap_count(entry) == 1) {
			/* skip swapcache */
			folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
						vma, vmf->address, false);
			page = &folio->page;
			if (folio) {
				__folio_set_locked(folio);
				__folio_set_swapbacked(folio);

				/* To provide entry to swap_read_folio() */
				folio->swap = entry;
				swap_read_folio(folio, true, NULL);
				folio->private = NULL;
			}
		} else {
		}
	}

	/*
	 * Back out if somebody else already faulted in this pte.
	 */
	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
				       &vmf->ptl);
	if (unlikely(!vmf->pte ||
		     !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
		goto out_nomap;

	swap_free(entry);
	pte = mk_pte(page, vma->vm_page_prot);
	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
	return ret;
}

While T1 and T2 run in parallel, T2 will split the folio; this races
with T1's reclamation without splitting. T2 splits the large folio into
a couple of normal pages and reclaims them. If T3 finishes
swap_read_folio() and gets the PTL earlier than T4, it calls set_pte
and swap_free; this causes zRAM to free the slot.
Then T4 will get zero data from swap_read_folio(), as the zRAM code
below fills zeros for freed slots:

static int zram_read_from_zspool(struct zram *zram, struct page *page,
				 u32 index)
{
	...
	handle = zram_get_handle(zram, index);
	if (!handle || zram_test_flag(zram, index, ZRAM_SAME)) {
		unsigned long value;
		void *mem;

		value = handle ? zram_get_element(zram, index) : 0;
		mem = kmap_local_page(page);
		zram_fill_page(mem, PAGE_SIZE, value);
		kunmap_local(mem);
		return 0;
	}
}

Usually, after T3 frees the swap entry and does set_pte, T4's
pte_same() check fails and it won't set the PTE again, so the zRAM
driver filling zero data into a freed slot is not a problem at all.
But in this race, T1 and T2 may set swap entries into the PTEs twice,
since T1 doesn't split while T2 does (the split normal folios are also
added to the reclaim list). Thus the corrupted zero data gets a chance
to be installed by T4: T4 reads the PTE which was set the second time,
and it carries the same swap entry as T4's orig_pte even though T3 has
already swapped the page in and freed the swap entry.

We have worked around this problem by preventing T2 from splitting
large folios, letting MADV_PAGEOUT skip a large folio entirely once we
detect a concurrent reclamation of it.

So my understanding is that changing vmscan isn't sufficient to support
swapping out large folios without splitting; we have to adjust madvise
as well. We will have a fix for this problem in

[PATCH RFC 6/6] mm: madvise: don't split mTHP for MADV_PAGEOUT
https://lore.kernel.org/linux-mm/20240118111036.72641-7-21cnbao@gmail.com/

But I feel this patch should be a part of your swap-out patchset rather
than the swap-in series of Chuanhua and me :-)

Thanks
Barry