Date: Wed, 21 Jun 2023 22:17:16 +0900
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Alexey Romanov, Minchan Kim
Cc: Sergey Senozhatsky, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel
Subject: Re: [PATCH v1 1/2] zsmalloc: add allocated objects counter for subpage
Message-ID: <20230621131716.GC2934656@google.com>
References: <20230619143506.45253-1-avromanov@sberdevices.ru> <20230619143506.45253-2-avromanov@sberdevices.ru> <20230620103629.GA42985@google.com> <20230620111635.gztldehfzvuzkdnj@cab-wsm-0029881>
In-Reply-To: <20230620111635.gztldehfzvuzkdnj@cab-wsm-0029881>
On (23/06/20 11:16), Alexey Romanov wrote:
> If sizeof(unsigned int) >= 32 bits, then this will be enough for us.
> Of course, in rare cases this will not be the case. But it seems that
> zram and the kernel already have similar places. For example, if the
> page size is 256 KB and unsigned int is 16 bits (2 bytes) wide, zram
> will not work on such a system, because we can't store the offset.
> But such a case is very rare; most systems have unsigned int of at
> least 32 bits.
>
> Therefore, I think that my idea is still applicable, we just need to
> change the counter type. What do you think?

My gut feeling is that we'd better avoid mixing architecture-specific
magic into generic code. It works fine until it doesn't. Maybe Minchan
will have a different opinion, though.

There can be other ways to avoid a linear scan of empty sub-pages. For
instance, something like the patch below probably covers fewer cases
than your patch 0002, but on the other hand it is rather generic,
trivial, and doesn't contain any assumptions about architecture
specifics.

(composed/edited in mail client, so likely broken, but it outlines the
idea)

====================================================================

mm/zsmalloc: do not scan empty zspages

We already stop zspage migration when we detect that the target zspage
has no space left for any new objects. There is one more thing we can
do in order to avoid doing useless work: stop scanning for allocated
objects in sub-pages when we have migrated the last inuse object from
the zspage in question.

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 02f7f414aade..2875152e6497 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1263,6 +1263,11 @@ static bool zspage_full(struct size_class *class, struct zspage *zspage)
 	return get_zspage_inuse(zspage) == class->objs_per_zspage;
 }
 
+static bool zspage_empty(struct zspage *zspage)
+{
+	return get_zspage_inuse(zspage) == 0;
+}
+
 /**
  * zs_lookup_class_index() - Returns index of the zsmalloc &size_class
  * that hold objects of the provided size.
@@ -1787,6 +1792,10 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		obj_idx++;
 		record_obj(handle, free_obj);
 		obj_free(class->size, used_obj, NULL);
+
+		/* Stop if there are no more objects to migrate */
+		if (zspage_empty(get_zspage(s_page)))
+			break;
 	}