Date: Thu, 21 Jan 2021 17:07:42 +0100
From: Alexey Gladkov
Biederman" Cc: Linus Torvalds , LKML , io-uring , Kernel Hardening , Linux Containers , Linux-MM , Andrew Morton , Christian Brauner , Jann Horn , Jens Axboe , Kees Cook , Oleg Nesterov Subject: Re: [RFC PATCH v3 1/8] Use refcount_t for ucounts reference counting Message-ID: <20210121160742.evd3632lepfytlxb@example.org> References: <116c7669744404364651e3b380db2d82bb23f983.1610722473.git.gladkov.alexey@gmail.com> <20210118194551.h2hrwof7b3q5vgoi@example.org> <20210118205629.zro2qkd3ut42bpyq@example.org> <87eeig74kv.fsf@x220.int.ebiederm.org> <20210121120427.iiggfmw3tpsmyzeb@example.org> <87ft2u2ss5.fsf@x220.int.ebiederm.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <87ft2u2ss5.fsf@x220.int.ebiederm.org> X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.6.1 (raptor.unsafe.ru [5.9.43.93]); Thu, 21 Jan 2021 16:08:00 +0000 (UTC) X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Thu, Jan 21, 2021 at 09:50:34AM -0600, Eric W. Biederman wrote: > >> The current ucount code does check for overflow and fails the increment > >> in every case. > >> > >> So arguably it will be a regression and inferior error handling behavior > >> if the code switches to the ``better'' refcount_t data structure. > >> > >> I originally didn't use refcount_t because silently saturating and not > >> bothering to handle the error makes me uncomfortable. > >> > >> Not having to acquire the ucounts_lock every time seems nice. Perhaps > >> the path forward would be to start with stupid/correct code that always > >> takes the ucounts_lock for every increment of ucounts->count, that is > >> later replaced with something more optimal. > >> > >> Not impacting performance in the non-namespace cases and having good > >> performance in the other cases is a fundamental requirement of merging > >> code like this. > > > > Did I understand your suggestion correctly that you suggest to use > > spin_lock for atomic_read and atomic_inc ? > > > > If so, then we are already incrementing the counter under ucounts_lock. > > > > ... > > if (atomic_read(&ucounts->count) == INT_MAX) > > ucounts = NULL; > > else > > atomic_inc(&ucounts->count); > > spin_unlock_irq(&ucounts_lock); > > return ucounts; > > > > something like this ? > > Yes. But without atomics. Something a bit more like: > > ... > > if (ucounts->count == INT_MAX) > > ucounts = NULL; > > else > > ucounts->count++; > > spin_unlock_irq(&ucounts_lock); > > return ucounts; This is the original code. > I do believe at some point we will want to say using the spin_lock for > ucounts->count is cumbersome, and suboptimal and we want to change it to > get a better performing implementation. > > Just for getting the semantics correct we should be able to use just > ucounts_lock for locking. Then when everything is working we can > profile and optimize the code. > > I just don't want figuring out what is needed to get hung up over little > details that we can change later. OK. So I will drop this my change for now. -- Rgrds, legion