From: Pasha Tatashin <pasha.tatashin@soleen.com>
Date: Wed, 27 Oct 2021 14:27:01 -0400
Subject: Re: [RFC 3/8] mm: Avoid using set_page_count() in set_page_recounted()
To: John Hubbard
Cc: LKML, linux-mm, linux-m68k@lists.linux-m68k.org, Anshuman Khandual,
	Matthew Wilcox, Andrew Morton, william.kucharski@oracle.com,
	Mike Kravetz, Vlastimil Babka, Geert Uytterhoeven, schmitzmic@gmail.com,
	Steven Rostedt, Ingo Molnar, Johannes Weiner, Roman Gushchin,
	Muchun Song, weixugc@google.com, Greg Thelen

On Wed, Oct 27, 2021 at 1:12 AM John Hubbard wrote:
>
> On 10/26/21 11:21, Pasha Tatashin wrote:
> > It must return the same thing, if it does not we have a bug in our
> > kernel which may lead to memory corruptions and security holes.
> >
> > So today we have this:
> > VM_BUG_ON_PAGE(page_ref_count(page), page); -> check ref_count is 0
> > < What if something modified here? Hmm..>
> > set_page_count(page, 1); -> Yet we reset it to 1.
> >
> > With my proposed change:
> > VM_BUG_ON_PAGE(page_ref_count(page), page); -> check ref_count is 0
> > refcnt = page_ref_inc_return(page); -> ref_count better be 1.
> > VM_BUG_ON_PAGE(refcnt != 1, page); -> Verify that it is 1.
> >
>
> Yes, you are just repeating what the diffs say.
>
> But it's still not good to have this function name doing something completely
> different than its name indicates.

I see. I can rename it to 'set_page_recounted'/'get_page_recounted'?

>
> >>
> >> I understand where this patchset is going, but this intermediate step is
> >> not a good move.
> >>
> >> Also, for the overall series, if you want to change from
> >> "set_page_count()" to "inc_and_verify_val_equals_one()", then the way to
> >> do that is *not* to depend solely on VM_BUG*() to verify. Instead,
> >> return something like -EBUSY if incrementing the value results in a
> >> surprise, and let the caller decide how to handle it.
>
> > Actually, -EBUSY would be OK if the problems were because we failed to
> > modify refcount for some reason, but if we modified refcount and got
> > an unexpected value (i.e. underflow/overflow) we better report it right
> > away instead of waiting for memory corruption to happen.
> >
> Having the caller do the BUG() or VM_BUG*() is not a significant delay.

We cannot guarantee that new callers in the future will check return
values; the idea behind this work is to ensure that we are always
protected from refcount underflow/overflow and from invalid refcount
modifications by set_refcount.

Pasha
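
P.S. For anyone skimming the archive, below is a sketch of the two
variants from the quoted hunks, written out as complete helpers. The
_today/_proposed suffixes, the include lines, and the surrounding
boilerplate are added here only for contrast and are not names from
the patch; the actual helper the patch touches may differ in detail.

#include <linux/mmdebug.h>   /* VM_BUG_ON_PAGE() */
#include <linux/page_ref.h>  /* page_ref_count(), set_page_count(), page_ref_inc_return() */

/* Today: assert the refcount is 0, then blindly overwrite it with 1. */
static inline void set_page_recounted_today(struct page *page)
{
	VM_BUG_ON_PAGE(page_ref_count(page), page);
	/* A modification that races in here is silently overwritten below. */
	set_page_count(page, 1);
}

/* Proposed: atomically go 0 -> 1 and verify the result is exactly 1. */
static inline void set_page_recounted_proposed(struct page *page)
{
	int refcnt;

	VM_BUG_ON_PAGE(page_ref_count(page), page);
	refcnt = page_ref_inc_return(page);
	/* An unexpected value (underflow, overflow, or a race) is caught here. */
	VM_BUG_ON_PAGE(refcnt != 1, page);
}

The -EBUSY alternative would instead have the helper return an error
and leave the reaction to each caller, which is exactly the reliance
on callers that the paragraph above argues against.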