Date: Mon, 30 Sep 2024 21:14:24 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: Kanchana P Sridhar
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, yosryahmed@google.com,
	nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com,
	shakeel.butt@linux.dev, ryan.roberts@arm.com, ying.huang@intel.com,
	21cnbao@gmail.com, akpm@linux-foundation.org, willy@infradead.org,
	nanhai.zou@intel.com, wajdi.k.feghali@intel.com, vinodh.gopal@intel.com
Subject: Re: [PATCH v9 6/7] mm: zswap: Support large folios in zswap_store().
Message-ID: <20241001011424.GB1349@cmpxchg.org>
References: <20240930221221.6981-1-kanchana.p.sridhar@intel.com>
 <20240930221221.6981-7-kanchana.p.sridhar@intel.com>
In-Reply-To: <20240930221221.6981-7-kanchana.p.sridhar@intel.com>

On Mon, Sep 30, 2024 at 03:12:20PM -0700, Kanchana P Sridhar wrote:
> /*********************************
>  * main API
>  **********************************/
> -bool zswap_store(struct folio *folio)
> +
> +/*
> + * Stores the page at specified "index" in a folio.

There is no more index and no folio in this function.

> + *
> + * @page: The page to store in zswap.
> + * @objcg: The folio's objcg. Caller has a reference.
> + * @pool: The zswap_pool to store the compressed data for the page.
> + *        The caller should have obtained a reference to a valid
> + *        zswap_pool by calling zswap_pool_tryget(), to pass as this
> + *        argument.
> + * @tree: The xarray for the @page's folio's swap.

This doesn't look safe. If the entries were to span a
SWAP_ADDRESS_SPACE_SHIFT boundary, the subpage entries would need to
be spread out to different trees also. Otherwise, it would break
loading and writeback down the line.

I *think* it works right now, but at best it's not very future
proof. Please look up the tree inside the function for the specific
swp_entry_t that is being stored. Same for the unwind/check_old:
section.

> + * @compressed_bytes: The compressed entry->length value is added
> + *                    to this, so that the caller can get the total
> + *                    compressed lengths of all sub-pages in a folio.
> + */

With just one caller, IMO the function comment can be dropped...
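To make the per-page tree lookup suggested above concrete, here is a
rough, untested sketch of what it could look like inside
zswap_store_page(); the variable names are placeholders, not taken
from the patch:

	/* derive the swap entry and tree from the page itself */
	swp_entry_t page_swpentry = page_swap_entry(page);
	struct xarray *tree = swap_zswap_tree(page_swpentry);

That way each subpage entry goes into (and is later deleted from) the
tree that actually covers its swap offset, even if the folio crosses
a SWAP_ADDRESS_SPACE_SHIFT boundary.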
> 	/* allocate entry */
> -	entry = zswap_entry_cache_alloc(GFP_KERNEL, folio_nid(folio));
> +	entry = zswap_entry_cache_alloc(GFP_KERNEL, folio_nid(page_folio(page)));

page_to_nid() is safe to use here.

> +bool zswap_store(struct folio *folio)
> +{
> +	long nr_pages = folio_nr_pages(folio);
> +	swp_entry_t swp = folio->swap;
> +	struct xarray *tree = swap_zswap_tree(swp);
> +	struct obj_cgroup *objcg = NULL;
> +	struct mem_cgroup *memcg = NULL;
> +	struct zswap_pool *pool;
> +	size_t compressed_bytes = 0;
> +	bool ret = false;
> +	long index;
> +
> +	VM_WARN_ON_ONCE(!folio_test_locked(folio));
> +	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
> +
> +	if (!zswap_enabled)
> +		goto check_old;
> +
> +	/*
> +	 * Check cgroup zswap limits:
> +	 *
> +	 * The cgroup zswap limit check is done once at the beginning of
> +	 * zswap_store(). The cgroup charging is done once, at the end
> +	 * of a successful folio store. What this means is, if the cgroup
> +	 * was within the zswap_max limit at the beginning of a large folio
> +	 * store, it could go over the limit by at most (HPAGE_PMD_NR - 1)
> +	 * pages due to this store.
> +	 */
> +	objcg = get_obj_cgroup_from_folio(folio);
> +	if (objcg && !obj_cgroup_may_zswap(objcg)) {
> +		memcg = get_mem_cgroup_from_objcg(objcg);
> +		if (shrink_memcg(memcg)) {
> +			mem_cgroup_put(memcg);
> +			goto put_objcg;
> +		}
> +		mem_cgroup_put(memcg);
> +	}
> +
> +	/*
> +	 * Check zpool utilization against zswap limits:
> +	 *
> +	 * The zswap zpool utilization is also checked against the limits
> +	 * just once, at the start of zswap_store(). If the check passes,
> +	 * any breaches of the limits set by zswap_max_pages() or
> +	 * zswap_accept_thr_pages() that may happen while storing this
> +	 * folio, will only be detected during the next call to
> +	 * zswap_store() by any process.
> +	 */
> +	if (zswap_check_limits())
> +		goto put_objcg;

There has been some back and forth on those comments. Both checks are
non-atomic and subject to races, so mentioning the HPAGE_PMD_NR - 1
overrun is somewhat misleading - it's much higher in the worst case.

Honestly, I would just get rid of the comments. You're not changing
anything fundamental in this regard, so I don't think there is a
burden to add new comments either.

> +
> +	pool = zswap_pool_current_get();
> +	if (!pool)
> +		goto put_objcg;
> +
> +	if (objcg) {
> +		memcg = get_mem_cgroup_from_objcg(objcg);
> +		if (memcg_list_lru_alloc(memcg, &zswap_list_lru, GFP_KERNEL)) {
> +			mem_cgroup_put(memcg);
> +			goto put_pool;
> +		}
> +		mem_cgroup_put(memcg);
> +	}
> +
> +	/*
> +	 * Store each page of the folio as a separate entry. If we fail to
> +	 * store a page, unwind by deleting all the pages for this folio
> +	 * currently in zswap.
> +	 */

The first sentence explains something that is internal to
zswap_store_page(). The second sentence IMO is obvious from the code
itself. I think you can delete this comment.

> +	for (index = 0; index < nr_pages; ++index) {
> +		if (!zswap_store_page(folio_page(folio, index), objcg, pool, tree, &compressed_bytes))
> +			goto put_pool;

Hah, I'm not a stickler for the 80 column line limit, but this is
pushing it ;) Please grab the page up front.

Yosry had also suggested replacing the compressed_bytes return
parameter with an actual return value. Basically, return compressed
bytes on success, -errno on error. I think this comment was missed
among the page_swap_entry() discussion.
	for (index = 0; index < nr_pages; index++) {
		struct page *page = folio_page(folio, index);
		int bytes;

		bytes = zswap_store_page(page, objcg, pool, tree);
		if (bytes < 0)
			goto put_pool;

		total_bytes += bytes;
	}
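On the zswap_store_page() side, that could take roughly the following
shape (an untested sketch only; the exact parameter list and error
codes here are assumptions, not something the patch or the review
prescribes):

static int zswap_store_page(struct page *page, struct obj_cgroup *objcg,
			    struct zswap_pool *pool, struct xarray *tree)
{
	struct zswap_entry *entry;

	/* page_to_nid() is enough; no need to go through page_folio() */
	entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
	if (!entry)
		return -ENOMEM;

	...

	/* success: report the compressed size to the caller */
	return entry->length;
}

(And per the earlier point, the tree parameter could be dropped in
favor of looking the tree up from page_swap_entry(page) inside the
function.)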