From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Hugh Dickins, Baolin Wang, Matthew Wilcox, Kemeng Shi,
    Chris Li, Nhat Pham, Baoquan He, Barry Song, linux-kernel@vger.kernel.org,
    Kairui Song
Subject: [PATCH v3 4/7] mm/shmem, swap: clean up swap entry splitting
Date: Fri, 27 Jun 2025 14:20:17 +0800
Message-ID: <20250627062020.534-5-ryncsn@gmail.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20250627062020.534-1-ryncsn@gmail.com>
References: <20250627062020.534-1-ryncsn@gmail.com>
Reply-To: Kairui Song
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Kairui Song

Instead of keeping separate paths for splitting the entry and then
recalculating the swap entry and index, do it in one place: whenever
swapin brings in a folio smaller than the entry, split the entry, and
always recalculate the entry and index, in case swapin reads in a folio
larger than the entry order. This removes duplicated code and function
calls, and makes the code more robust.

Signed-off-by: Kairui Song
---
 mm/shmem.c | 103 +++++++++++++++++++++--------------------------------
 1 file changed, 41 insertions(+), 62 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index f85a985167c5..5be9c905396e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2178,8 +2178,12 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
 	swap_free_nr(swap, nr_pages);
 }
 
-static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
-				   swp_entry_t swap, gfp_t gfp)
+/*
+ * Split an existing large swap entry. @index should point to one sub mapping
+ * slot within the entry @swap; this sub slot will be split into order 0.
+ */
+static int shmem_split_swap_entry(struct inode *inode, pgoff_t index,
+				  swp_entry_t swap, gfp_t gfp)
 {
 	struct address_space *mapping = inode->i_mapping;
 	XA_STATE_ORDER(xas, &mapping->i_pages, index, 0);
@@ -2250,7 +2254,7 @@ static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
 	if (xas_error(&xas))
 		return xas_error(&xas);
 
-	return entry_order;
+	return 0;
 }
 
 /*
@@ -2267,11 +2271,11 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	struct address_space *mapping = inode->i_mapping;
 	struct mm_struct *fault_mm = vma ? vma->vm_mm : NULL;
 	struct shmem_inode_info *info = SHMEM_I(inode);
+	int error, nr_pages, order, swap_order;
 	struct swap_info_struct *si;
 	struct folio *folio = NULL;
 	bool skip_swapcache = false;
 	swp_entry_t swap;
-	int error, nr_pages, order, split_order;
 
 	VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
 	swap = radix_to_swp_entry(*foliop);
@@ -2321,70 +2325,43 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 			goto failed;
 		}
 
-		/*
-		 * Now swap device can only swap in order 0 folio, then we
-		 * should split the large swap entry stored in the pagecache
-		 * if necessary.
-		 */
-		split_order = shmem_split_large_entry(inode, index, swap, gfp);
-		if (split_order < 0) {
-			error = split_order;
-			goto failed;
-		}
-
-		/*
-		 * If the large swap entry has already been split, it is
-		 * necessary to recalculate the new swap entry based on
-		 * the old order alignment.
-		 */
-		if (split_order > 0) {
-			pgoff_t offset = index - round_down(index, 1 << split_order);
-
-			swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
-		}
-
 		/* Here we actually start the io */
 		folio = shmem_swapin_cluster(swap, gfp, info, index);
 		if (!folio) {
 			error = -ENOMEM;
 			goto failed;
 		}
-	} else if (order > folio_order(folio)) {
-		/*
-		 * Swap readahead may swap in order 0 folios into swapcache
-		 * asynchronously, while the shmem mapping can still stores
-		 * large swap entries. In such cases, we should split the
-		 * large swap entry to prevent possible data corruption.
-		 */
-		split_order = shmem_split_large_entry(inode, index, swap, gfp);
-		if (split_order < 0) {
-			folio_put(folio);
-			folio = NULL;
-			error = split_order;
-			goto failed;
-		}
-
-		/*
-		 * If the large swap entry has already been split, it is
-		 * necessary to recalculate the new swap entry based on
-		 * the old order alignment.
-		 */
-		if (split_order > 0) {
-			pgoff_t offset = index - round_down(index, 1 << split_order);
-
-			swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
-		}
-	} else if (order < folio_order(folio)) {
-		swap.val = round_down(swap.val, 1 << folio_order(folio));
 	}
 
 alloced:
+	/*
+	 * We need to split an existing large entry if swapin brought in a
+	 * smaller folio for various reasons.
+	 *
+	 * It is worth noting there is a special case: if there is a smaller
+	 * cached folio that covers @swap, but not @index (it only covers the
+	 * first few sub entries of the large entry, while @index points to
+	 * later parts), the swap cache lookup will still see this folio,
+	 * and we need to split the large entry here. Later checks will fail,
+	 * as it can't satisfy the swap requirement, and we will retry
+	 * the swapin from the beginning.
+	 */
+	swap_order = folio_order(folio);
+	if (order > swap_order) {
+		error = shmem_split_swap_entry(inode, index, swap, gfp);
+		if (error)
+			goto failed_nolock;
+	}
+
+	index = round_down(index, 1 << swap_order);
+	swap.val = round_down(swap.val, 1 << swap_order);
+
 	/* We have to do this with folio locked to prevent races */
 	folio_lock(folio);
 	if ((!skip_swapcache && !folio_test_swapcache(folio)) ||
 	    folio->swap.val != swap.val) {
 		error = -EEXIST;
-		goto unlock;
+		goto failed_unlock;
 	}
 	if (!folio_test_uptodate(folio)) {
 		error = -EIO;
@@ -2405,8 +2382,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		goto failed;
 	}
 
-	error = shmem_add_to_page_cache(folio, mapping,
-					round_down(index, nr_pages),
+	error = shmem_add_to_page_cache(folio, mapping, index,
 					swp_to_radix_entry(swap), gfp);
 	if (error)
 		goto failed;
@@ -2417,8 +2393,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	folio_mark_accessed(folio);
 
 	if (skip_swapcache) {
+		swapcache_clear(si, folio->swap, folio_nr_pages(folio));
 		folio->swap.val = 0;
-		swapcache_clear(si, swap, nr_pages);
 	} else {
 		delete_from_swap_cache(folio);
 	}
@@ -2434,13 +2410,16 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	if (error == -EIO)
 		shmem_set_folio_swapin_error(inode, index, folio, swap,
 					     skip_swapcache);
-unlock:
-	if (skip_swapcache)
-		swapcache_clear(si, swap, folio_nr_pages(folio));
-	if (folio) {
+failed_unlock:
+	if (folio)
 		folio_unlock(folio);
-		folio_put(folio);
+failed_nolock:
+	if (skip_swapcache) {
+		swapcache_clear(si, folio->swap, folio_nr_pages(folio));
+		folio->swap.val = 0;
 	}
+	if (folio)
+		folio_put(folio);
 	put_swap_device(si);
 	return error;
 }
-- 
2.50.0
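
[Editor's note, not part of the patch: the alignment arithmetic that the new code
performs after swapin can be illustrated with a minimal standalone userspace C
sketch. The values below are hypothetical, and round_down() is defined locally to
mirror the kernel macro; the point is only that rounding both the mapping index and
the swap entry value down to the swapped-in folio's order keeps them consistent for
the subsequent page cache insertion.]

#include <stdio.h>

/* Local stand-in for the kernel's round_down(); power-of-two alignment assumed. */
#define round_down(x, y) ((x) & ~((unsigned long)(y) - 1))

int main(void)
{
	unsigned long index = 21;	/* hypothetical faulting page index */
	unsigned long swap_val = 0x153;	/* hypothetical swap entry value at that index */
	unsigned int swap_order = 2;	/* order of the folio that swapin brought in */

	/*
	 * Align both the mapping index and the swap entry value to the folio
	 * order, so the folio is installed at the slot that covers
	 * [index, index + (1 << swap_order)).
	 */
	index = round_down(index, 1UL << swap_order);
	swap_val = round_down(swap_val, 1UL << swap_order);

	printf("aligned index=%lu swap.val=0x%lx\n", index, swap_val);
	return 0;
}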