Date: Sun, 25 Aug 2024 15:31:21 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: Andrew Morton, Baolin Wang
cc: hughd@google.com, willy@infradead.org, david@redhat.com,
    wangkefeng.wang@huawei.com, chrisl@kernel.org, ying.huang@intel.com,
    21cnbao@gmail.com, ryan.roberts@arm.com, shy828301@gmail.com,
    ziy@nvidia.com, ioworker0@gmail.com, da.gomez@samsung.com,
    p.raghav@samsung.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 8/9] mm: shmem: split large entry if the swapin folio is not large
In-Reply-To: <4a0f12f27c54a62eb4d9ca1265fed3a62531a63e.1723434324.git.baolin.wang@linux.alibaba.com>
Message-ID:
References: <4a0f12f27c54a62eb4d9ca1265fed3a62531a63e.1723434324.git.baolin.wang@linux.alibaba.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 12 Aug 2024, Baolin Wang wrote:

> Now the swap device can only swap-in order 0 folio, even though a large
> folio is swapped out. This requires us to split the large entry previously
> saved in the shmem pagecache to support the swap in of small folios.
>
> Signed-off-by: Baolin Wang
> ---
>  mm/shmem.c | 100 +++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 100 insertions(+)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 345e25425e37..996062dc196b 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1990,6 +1990,81 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
>  	swap_free_nr(swap, nr_pages);
>  }
>
> +static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
> +				   swp_entry_t swap, int new_order, gfp_t gfp)
> +{
> +	struct address_space *mapping = inode->i_mapping;
> +	XA_STATE_ORDER(xas, &mapping->i_pages, index, new_order);
> +	void *alloced_shadow = NULL;
> +	int alloced_order = 0, i;

gfp needs to be adjusted: see fix patch below.

> +
> +	for (;;) {
> +		int order = -1, split_order = 0;
> +		void *old = NULL;
> +
> +		xas_lock_irq(&xas);
> +		old = xas_load(&xas);
> +		if (!xa_is_value(old) || swp_to_radix_entry(swap) != old) {
> +			xas_set_err(&xas, -EEXIST);
> +			goto unlock;
> +		}
> +
> +		order = xas_get_order(&xas);
> +
> +		/* Swap entry may have changed before we re-acquire the lock */
> +		if (alloced_order &&
> +		    (old != alloced_shadow || order != alloced_order)) {
> +			xas_destroy(&xas);
> +			alloced_order = 0;
> +		}
> +
> +		/* Try to split large swap entry in pagecache */
> +		if (order > 0 && order > new_order) {

I have not even attempted to understand all the manipulations of order
and new_order and alloced_order and split_order. And further down it
turns out that this is only ever called with new_order 0.

You may be wanting to cater for more generality in future, but for now
please cut this down to the new_order 0 case, and omit that parameter.
It will be easier for us to think about the xa_get_order() races if
the possibilities are more limited.

> +			if (!alloced_order) {
> +				split_order = order;
> +				goto unlock;
> +			}
> +			xas_split(&xas, old, order);
> +
> +			/*
> +			 * Re-set the swap entry after splitting, and the swap
> +			 * offset of the original large entry must be continuous.
> +			 */
> +			for (i = 0; i < 1 << order; i += (1 << new_order)) {
> +				pgoff_t aligned_index = round_down(index, 1 << order);
> +				swp_entry_t tmp;
> +
> +				tmp = swp_entry(swp_type(swap), swp_offset(swap) + i);
> +				__xa_store(&mapping->i_pages, aligned_index + i,
> +					   swp_to_radix_entry(tmp), 0);
> +			}

So that is done under xas lock: good. But is the intermediate state
visible to RCU readers, and could that be a problem?
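As an aside, for anyone checking the arithmetic of the re-store loop
above against the round_down() recalculation in shmem_swapin_folio()
further down, a minimal userspace sketch of the offset calculation:
illustration only, the round_down() macro is re-modelled here for
power-of-two sizes, and the index, order and swap offset values are
invented.

#include <stdio.h>

/* Userspace model of the kernel's round_down() for power-of-two sizes */
#define round_down(x, y) ((x) & ~((y) - 1))

int main(void)
{
	unsigned long base_offset = 512;  /* swap offset of the old large entry (invented) */
	unsigned long index = 21;         /* faulting page index (invented) */
	int order = 4;                    /* large entry covered 1 << 4 = 16 pages */

	/* The re-store loop puts offset base_offset + i at slot aligned_index + i ... */
	unsigned long aligned_index = round_down(index, 1UL << order);

	/* ... so the order-0 entry for 'index' must carry this swap offset */
	unsigned long offset = base_offset + (index - aligned_index);

	printf("index %lu -> swap offset %lu\n", index, offset);  /* prints 517 */
	return 0;
}

Both sites have to agree on that alignment, which is presumably why the
recalculation below rounds index down by split_order before adding the
difference back on to the swap offset.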
> +		}
> +
> +unlock:
> +		xas_unlock_irq(&xas);
> +
> +		/* split needed, alloc here and retry. */
> +		if (split_order) {
> +			xas_split_alloc(&xas, old, split_order, gfp);
> +			if (xas_error(&xas))
> +				goto error;
> +			alloced_shadow = old;
> +			alloced_order = split_order;
> +			xas_reset(&xas);
> +			continue;
> +		}
> +
> +		if (!xas_nomem(&xas, gfp))
> +			break;
> +	}
> +
> +error:
> +	if (xas_error(&xas))
> +		return xas_error(&xas);
> +
> +	return alloced_order;
> +}
> +
>  /*
>   * Swap in the folio pointed to by *foliop.
>   * Caller has to make sure that *foliop contains a valid swapped folio.
> @@ -2026,12 +2101,37 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  	/* Look it up and read it in.. */
>  	folio = swap_cache_get_folio(swap, NULL, 0);
>  	if (!folio) {
> +		int split_order;
> +
>  		/* Or update major stats only when swapin succeeds?? */
>  		if (fault_type) {
>  			*fault_type |= VM_FAULT_MAJOR;
>  			count_vm_event(PGMAJFAULT);
>  			count_memcg_event_mm(fault_mm, PGMAJFAULT);
>  		}
> +
> +		/*
> +		 * Now swap device can only swap in order 0 folio, then we
> +		 * should split the large swap entry stored in the pagecache
> +		 * if necessary.
> +		 */
> +		split_order = shmem_split_large_entry(inode, index, swap, 0, gfp);
> +		if (split_order < 0) {
> +			error = split_order;
> +			goto failed;
> +		}
> +
> +		/*
> +		 * If the large swap entry has already been split, it is
> +		 * necessary to recalculate the new swap entry based on
> +		 * the old order alignment.
> +		 */
> +		if (split_order > 0) {
> +			pgoff_t offset = index - round_down(index, 1 << split_order);
> +
> +			swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
> +		}
> +
>  		/* Here we actually start the io */
>  		folio = shmem_swapin_cluster(swap, gfp, info, index);
>  		if (!folio) {
> --

[PATCH] mm: shmem: split large entry if the swapin folio is not large fix

Fix all the

Unexpected gfp: 0x2 (__GFP_HIGHMEM). Fixing up to gfp: 0x1120d0
(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_HARDWALL|__GFP_RECLAIMABLE).
Fix your code!

warnings from kmalloc_fix_flags() from xas_split_alloc() from
shmem_split_large_entry().

Fixes: a960844d5ac9 ("mm: shmem: split large entry if the swapin folio is not large")
Signed-off-by: Hugh Dickins <hughd@google.com>
---
 mm/shmem.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/shmem.c b/mm/shmem.c
index ae2245dce8ae..85e3bd3e709e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1999,6 +1999,9 @@ static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
 	void *alloced_shadow = NULL;
 	int alloced_order = 0, i;
 
+	/* Convert user data gfp flags to xarray node gfp flags */
+	gfp &= GFP_RECLAIM_MASK;
+
 	for (;;) {
 		int order = -1, split_order = 0;
 		void *old = NULL;
 
-- 
2.35.3