In-Reply-To: <19d16b40-355f-3f79-dcba-e1d8d2216d33@nvidia.com>
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Date: Mon, 1 Nov 2021 10:22:43 -0400
Subject: Re: [RFC 3/8] mm: Avoid using set_page_count() in set_page_recounted()
To: John Hubbard
Cc: LKML, linux-mm, linux-m68k@lists.linux-m68k.org, Anshuman Khandual,
    Matthew Wilcox, Andrew Morton, william.kucharski@oracle.com,
    Mike Kravetz, Vlastimil Babka, Geert Uytterhoeven, schmitzmic@gmail.com,
    Steven Rostedt, Ingo Molnar, Johannes Weiner, Roman Gushchin,
    Muchun Song, weixugc@google.com, Greg Thelen

> >> Yes, you are just repeating what the diffs say.
> >>
> >> But it's still not good to have this function name doing something completely
> >> different than its name indicates.
> >
> > I see, I can rename it to: 'set_page_recounted/get_page_recounted' ?
> >
>
> What? No, that's not where I was going at all. The function is already
> named set_page_refcounted(), and one of the problems I see is that your
> changes turn it into something that most certainly does not
> set_page_refcounted(). Instead, this patch *increments* the refcount.
> That is not the same thing.
>
> And then it uses a .config-sensitive assertion to "prevent" problems.
> And by that I mean, the wording throughout this series seems to equate
> VM_BUG_ON_PAGE() assertions with real assertions. They are only active,
> however, in CONFIG_DEBUG_VM configurations, and provide no protection at
> all for normal (most distros) users. That's something that the wording,
> comments, and even design should be tweaked to account for.

VM_BUG_ON and BUG_ON should be treated the same. Yes, they are config
sensitive, but in both cases *BUG_ON() means that an unrecoverable
problem has occurred. The only difference between the two is that
VM_BUG_ON() is not enabled when distros decide to reduce the size of
their kernel and improve runtime performance by skipping some extra
checking.

There is no logical separation between VM_BUG_ON and BUG_ON; there has
been a lengthy discussion about this:
https://lore.kernel.org/lkml/CA+55aFy6a8BVWtqgeJKZuhU-CZFVZ3X90SdQ5z+NTDDsEOnpJA@mail.gmail.com/

"so *no*. VM_BUG_ON() is no less deadly than a regular BUG_ON(). It just
allows some people to build smaller kernels, but apparently distro
people would rather have debugging than save a few kB of RAM."

Losing control of ref_count is an unrecoverable problem, because it
leads to security-sensitive memory corruption. It is better to crash the
kernel when that happens than to end up with pages mapped into the wrong
address space. The races are tricky to spot, but set_page_count() is
inherently dangerous, so I am removing it entirely and replacing it with
safer operations that do the same thing.
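
Just to make the config sensitivity above concrete: the only thing
CONFIG_DEBUG_VM changes is whether these assertions are compiled in at
all, roughly like this (paraphrased from include/linux/mmdebug.h, not a
verbatim copy; the exact definitions vary between kernel versions):

	#ifdef CONFIG_DEBUG_VM
	/* Real runtime assertion: dump the offending page and crash. */
	#define VM_BUG_ON(cond)	BUG_ON(cond)
	#define VM_BUG_ON_PAGE(cond, page)				\
		do {							\
			if (unlikely(cond)) {				\
				dump_page(page, "VM_BUG_ON_PAGE(" __stringify(cond) ")"); \
				BUG();					\
			}						\
		} while (0)
	#else
	/* Compile-time expression checking only; no runtime check emitted. */
	#define VM_BUG_ON(cond)	BUILD_BUG_ON_INVALID(cond)
	#define VM_BUG_ON_PAGE(cond, page)	VM_BUG_ON(cond)
	#endif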
One example of such a race is this fix:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?h=7118fc29

> >>>> I understand where this patchset is going, but this intermediate step is
> >>>> not a good move.
> >>>>
> >>>> Also, for the overall series, if you want to change from
> >>>> "set_page_count()" to "inc_and_verify_val_equals_one()", then the way to
> >>>> do that is *not* to depend solely on VM_BUG*() to verify. Instead,
> >>>> return something like -EBUSY if incrementing the value results in a
> >>>> surprise, and let the caller decide how to handle it.

In set_page_refcounted() we already have:

	VM_BUG_ON_PAGE(page_ref_count(page), page);
	set_page_count(page, 1);

I am pointing out that the above code is racy: between the
VM_BUG_ON_PAGE() check and the unconditional set to 1, the value of
page->_refcount can change. I am replacing it with a functionally
equivalent version that is not racy. There is no need to complicate the
code by introducing new -EBUSY returns here; that would only make this
code more fragile.

> >>> Actually, -EBUSY would be OK if the problems were because we failed to

I am not sure -EBUSY would be OK here: it means we had a race which we
were not aware of, and which could have led to memory corruption.

> >>> modify refcount for some reason, but if we modified refcount and got
> >>> an unexpected value (i.e. underflow/overflow) we better report it right
> >>> away instead of waiting for memory corruption to happen.
> >>>
> >>
> >> Having the caller do the BUG() or VM_BUG*() is not a significant delay.

I agree; however, helper functions exist to remove code duplication. If
we must verify the assumption of set_page_refcounted() -- that a page
with no references is turned into a page with exactly one reference --
it is better to do it in one place than at every call site. We do that
today in this helper function, and I do not see why we would change that.
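
Concretely, the non-racy form of the helper I have in mind is along
these lines (an illustrative sketch only, not necessarily the exact code
in the series; the series may use a dedicated page_ref_* helper instead
of touching _refcount directly):

	static inline void set_page_refcounted(struct page *page)
	{
		int refcnt;

		VM_BUG_ON_PAGE(PageTail(page), page);
		/*
		 * Go from 0 to 1 with a single atomic operation instead of a
		 * check followed by set_page_count(), so there is no window
		 * in which another path can change page->_refcount between
		 * the check and the store.
		 */
		refcnt = atomic_inc_return(&page->_refcount);
		VM_BUG_ON_PAGE(refcnt != 1, page);
	}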