From: Chris Li <chrisl@kernel.org>
Date: Wed, 4 Dec 2024 11:34:39 -0800
Subject: Re: [PATCH 4/4] mm, swap_cgroup: remove global swap cgroup lock
To: Kairui Song
Cc: linux-mm@kvack.org, Andrew Morton, Hugh Dickins, "Huang, Ying", Yosry Ahmed, Roman Gushchin, Shakeel Butt, Johannes Weiner, Barry Song, Michal Hocko, linux-kernel@vger.kernel.org
References: <20241202184154.19321-1-ryncsn@gmail.com> <20241202184154.19321-5-ryncsn@gmail.com>
In-Reply-To: <20241202184154.19321-5-ryncsn@gmail.com>
On Mon, Dec 2, 2024 at 10:42 AM Kairui Song wrote:
>
> From: Kairui Song
>
> commit e9e58a4ec3b1 ("memcg: avoid use cmpxchg in swap cgroup maintainance")
> replaced the cmpxchg/xchg with a global irq spinlock because some archs
> don't support 2-byte cmpxchg/xchg. Clearly this won't scale well.
>
> And as commented in swap_cgroup.c, this lock is not needed for map
> synchronization.
>
> Emulation of 2-byte cmpxchg/xchg with atomic isn't hard, so implement
> it to get rid of this lock.
>
> Testing using 64G brd and building the kernel with make -j96 in a 1.5G
> memory cgroup using 4k folios showed the improvement below (10 test runs):
>
> Before this series:
> Sys time: 10730.08 (stdev 49.030728)
> Real time: 171.03 (stdev 0.850355)
>
> After this commit:
> Sys time: 9612.24 (stdev 66.310789), -10.42%
> Real time: 159.78 (stdev 0.577193), -6.57%
>
> With 64k folios and 2G memcg:
> Before this series:
> Sys time: 7626.77 (stdev 43.545517)
> Real time: 136.22 (stdev 1.265544)
>
> After this commit:
> Sys time: 6936.03 (stdev 39.996280), -9.06%
> Real time: 129.65 (stdev 0.880039), -4.82%
>
> Sequential swapout of 8G 4k zero folios (24 test runs):
> Before this series:
> 5461409.12 us (stdev 183957.827084)
>
> After this commit:
> 5420447.26 us (stdev 196419.240317)
>
> Sequential swapin of 8G 4k zero folios (24 test runs):
> Before this series:
> 19736958.916667 us (stdev 189027.246676)
>
> After this commit:
> 19662182.629630 us (stdev 172717.640614)
>
> Performance is better or at least not worse for all tests above.
>
> Signed-off-by: Kairui Song
> ---
>  mm/swap_cgroup.c | 56 +++++++++++++++++++++++++++++++++++-------------
>  1 file changed, 41 insertions(+), 15 deletions(-)
>
> diff --git a/mm/swap_cgroup.c b/mm/swap_cgroup.c
> index a76afdc3666a..028f5e6be3f0 100644
> --- a/mm/swap_cgroup.c
> +++ b/mm/swap_cgroup.c
> @@ -5,6 +5,15 @@
>
>  #include  /* depends on mm.h include */
>
> +#define ID_PER_UNIT (sizeof(atomic_t) / sizeof(unsigned short))

You might want to add a compile-time assert that (sizeof(atomic_t) %
sizeof(unsigned short)) is zero. Couldn't hurt.

> +struct swap_cgroup_unit {
> +	union {
> +		int raw;
> +		atomic_t val;
> +		unsigned short __id[ID_PER_UNIT];
> +	};
> +};

I suggest just getting rid of this complicated struct/union and using
bit shift and mask to get the u16 out from the atomic_t.
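The compile-time check suggested above could look like this userspace sketch (in the kernel it would be a BUILD_BUG_ON() next to the ID_PER_UNIT definition; C11's atomic_int stands in for the kernel's atomic_t):

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch only: guarantee an atomic word holds a whole number of 16-bit
 * cgroup ids, so offset arithmetic never straddles a word boundary. */
#define ID_PER_UNIT (sizeof(atomic_int) / sizeof(unsigned short))

static_assert(sizeof(atomic_int) % sizeof(unsigned short) == 0,
	      "an atomic word must hold a whole number of u16 ids");
```

On common ABIs this divides 4 bytes by 2, giving two ids per unit, but the assert documents the assumption rather than relying on it silently.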
> +
>  static DEFINE_MUTEX(swap_cgroup_mutex);
>
>  struct swap_cgroup {
> @@ -12,8 +21,10 @@ struct swap_cgroup {
>  };
>
>  struct swap_cgroup_ctrl {
> -	unsigned short *map;
> -	spinlock_t lock;
> +	union {
> +		struct swap_cgroup_unit *units;
> +		unsigned short *map;

You really shouldn't access the map as an "unsigned short" array;
I suggest changing the array pointer to "atomic_t" instead.

> +	};
>  };
>
>  static struct swap_cgroup_ctrl swap_cgroup_ctrl[MAX_SWAPFILES];
> @@ -31,6 +42,24 @@ static struct swap_cgroup_ctrl swap_cgroup_ctrl[MAX_SWAPFILES];
>   *
>   * TODO: we can push these buffers out to HIGHMEM.
>   */
> +static unsigned short __swap_cgroup_xchg(void *map,
> +					 pgoff_t offset,
> +					 unsigned int new_id)
> +{
> +	unsigned int old_id;
> +	struct swap_cgroup_unit *units = map;
> +	struct swap_cgroup_unit *unit = &units[offset / ID_PER_UNIT];
> +	struct swap_cgroup_unit new, old = { .raw = atomic_read(&unit->val) };
> +
> +	do {
> +		new.raw = old.raw;
> +		old_id = old.__id[offset % ID_PER_UNIT];
> +		new.__id[offset % ID_PER_UNIT] = new_id;
> +	} while (!atomic_try_cmpxchg(&unit->val, &old.raw, new.raw));

I suggest just calculating the atomic_t offset (offset / ID_PER_UNIT)
and getting the address of the atomic_t, then using mask and shift to
construct the new atomic_t value. It is likely to generate better
code. You don't want the compiler to generate memory loads and stores
for constructing the temporary new value.

I haven't checked the generated machine code, but I suspect the
compiler is not smart enough to convert those into register shifts
here, which is what you really want.
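The shift/mask alternative suggested above might look like the following userspace sketch, with C11 atomics standing in for the kernel's atomic_t/atomic_try_cmpxchg and all names illustrative rather than taken from the patch:

```c
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch: the map is an array of 32-bit atomic words, each holding two
 * 16-bit cgroup ids. The new word is built in a register with shift and
 * mask, so there is no union and no byte-wise store of a temporary. */
static uint16_t id_xchg(atomic_uint *map, size_t offset, uint16_t new_id)
{
	atomic_uint *word = &map[offset / 2];	/* two u16 ids per word */
	unsigned int shift = (offset % 2) * 16;	/* bit position of this id */
	unsigned int mask = 0xffffu << shift;
	unsigned int old = atomic_load(word);
	unsigned int val;

	do {
		/* replace just this id's bits; the CAS reloads old on failure */
		val = (old & ~mask) | ((unsigned int)new_id << shift);
	} while (!atomic_compare_exchange_weak(word, &old, val));

	return (uint16_t)((old & mask) >> shift);
}
```

A side benefit: because each id is addressed by bit position rather than by byte address, this form has no endianness dependence at all.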
> +
> +	return old_id;
> +}
> +
>  /**
>   * swap_cgroup_record - record mem_cgroup for a set of swap entries
>   * @ent: the first swap entry to be recorded into
> @@ -44,22 +73,19 @@ unsigned short swap_cgroup_record(swp_entry_t ent, unsigned short id,
>  				  unsigned int nr_ents)
>  {
>  	struct swap_cgroup_ctrl *ctrl;
> -	unsigned short *map;
> -	unsigned short old;
> -	unsigned long flags;
>  	pgoff_t offset = swp_offset(ent);
>  	pgoff_t end = offset + nr_ents;
> +	unsigned short old, iter;
> +	unsigned short *map;

Make it an atomic_t pointer here as well.

>
>  	ctrl = &swap_cgroup_ctrl[swp_type(ent)];
>  	map = ctrl->map;
>
> -	spin_lock_irqsave(&ctrl->lock, flags);
> -	old = map[offset];
> +	old = READ_ONCE(map[offset]);

Ah, you shouldn't perform the u16 read directly. That runs into the
endian problem of how the u16s are arranged within the atomic_t. You
should do an atomic read and then shift the bits out, so you don't
have the endian problem. It is a bad idea to mix atomic updates with
reads from the middle of the atomic memory location.

Chris

>  	do {
> -		VM_BUG_ON(map[offset] != old);
> -		map[offset] = id;
> +		iter = __swap_cgroup_xchg(map, offset, id);
> +		VM_BUG_ON(iter != old);
>  	} while (++offset != end);
> -	spin_unlock_irqrestore(&ctrl->lock, flags);
>
>  	return old;
>  }
> @@ -85,20 +111,20 @@ unsigned short lookup_swap_cgroup_id(swp_entry_t ent)
>
>  int swap_cgroup_swapon(int type, unsigned long max_pages)
>  {
> -	void *map;
> +	struct swap_cgroup_unit *units;
>  	struct swap_cgroup_ctrl *ctrl;
>
>  	if (mem_cgroup_disabled())
>  		return 0;
>
> -	map = vzalloc(max_pages * sizeof(unsigned short));
> -	if (!map)
> +	units = vzalloc(DIV_ROUND_UP(max_pages, ID_PER_UNIT) *
> +			sizeof(struct swap_cgroup_unit));
> +	if (!units)
>  		goto nomem;
>
>  	ctrl = &swap_cgroup_ctrl[type];
>  	mutex_lock(&swap_cgroup_mutex);
> -	ctrl->map = map;
> -	spin_lock_init(&ctrl->lock);
> +	ctrl->units = units;
>  	mutex_unlock(&swap_cgroup_mutex);
>
>  	return 0;
> --
> 2.47.0
>
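The endian-safe lookup Chris describes in his last comment could be sketched like this (userspace C11 atomics, illustrative names; a companion to a shift/mask writer, not code from the patch):

```c
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch: load the whole 32-bit word atomically, then extract the
 * 16-bit id by its bit position. A direct u16 load from inside the
 * word (like the patch's READ_ONCE(map[offset])) would pick a
 * byte-order-dependent half; bit arithmetic is endian-neutral. */
static uint16_t id_read(atomic_uint *map, size_t offset)
{
	unsigned int word = atomic_load(&map[offset / 2]);
	unsigned int shift = (offset % 2) * 16;

	return (uint16_t)((word >> shift) & 0xffffu);
}
```

The result is the same on big- and little-endian machines, which is exactly why reading the whole word and shifting is preferable to aliasing the atomic as a u16 array.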