From mboxrd@z Thu Jan 1 00:00:00 1970
To: Mike Rapoport, Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
 Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
 Elena Reshetova, "H. Peter Anvin", Hagen Paul Pfeifer, Ingo Molnar,
 James Bottomley, Kees Cook, "Kirill A. Shutemov", Matthew Wilcox,
 Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
 Michael Kerrisk, Palmer Dabbelt, Palmer Dabbelt, Paul Walmsley,
 Peter Zijlstra, "Rafael J. Wysocki", Rick Edgecombe, Roman Gushchin,
 Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
 Yury Norov, linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-nvdimm@lists.01.org,
 linux-riscv@lists.infradead.org, x86@kernel.org
References: <20210513184734.29317-1-rppt@kernel.org> <20210513184734.29317-4-rppt@kernel.org>
From: David Hildenbrand
Organization: Red Hat
Subject: Re: [PATCH v19 3/8] set_memory: allow set_direct_map_*_noflush() for multiple pages
Message-ID: <858e5561-bc7d-4ce1-5cb8-3c333199d52a@redhat.com>
Date: Fri, 14 May 2021 10:43:29 +0200
In-Reply-To: <20210513184734.29317-4-rppt@kernel.org>

On 13.05.21 20:47, Mike Rapoport wrote:
> From: Mike Rapoport
>
> The underlying implementations of set_direct_map_invalid_noflush() and
> set_direct_map_default_noflush() allow updating multiple contiguous pages
> at once.
>
> Add numpages parameter to set_direct_map_*_noflush() to expose this
> ability with these APIs.
>

[...]

Finally doing some in-depth review, sorry for not having had a detailed
look earlier.
>
> -int set_direct_map_invalid_noflush(struct page *page)
> +int set_direct_map_invalid_noflush(struct page *page, int numpages)
>  {
>  	struct page_change_data data = {
>  		.set_mask = __pgprot(0),
>  		.clear_mask = __pgprot(PTE_VALID),
>  	};
> +	unsigned long size = PAGE_SIZE * numpages;
>

Nit: I'd have made this const and added an early exit for !numpages. But
whatever you prefer.

>  	if (!debug_pagealloc_enabled() && !rodata_full)
>  		return 0;
>
>  	return apply_to_page_range(&init_mm,
>  				   (unsigned long)page_address(page),
> -				   PAGE_SIZE, change_page_range, &data);
> +				   size, change_page_range, &data);
>  }
>
> -int set_direct_map_default_noflush(struct page *page)
> +int set_direct_map_default_noflush(struct page *page, int numpages)
>  {
>  	struct page_change_data data = {
>  		.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
>  		.clear_mask = __pgprot(PTE_RDONLY),
>  	};
> +	unsigned long size = PAGE_SIZE * numpages;
>

Nit: ditto

>  	if (!debug_pagealloc_enabled() && !rodata_full)
>  		return 0;
>
>  	return apply_to_page_range(&init_mm,
>  				   (unsigned long)page_address(page),
> -				   PAGE_SIZE, change_page_range, &data);
> +				   size, change_page_range, &data);
>  }
>

[...]

>  extern int kernel_set_to_readonly;
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 156cd235659f..15a55d6e9cec 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -2192,14 +2192,14 @@ static int __set_pages_np(struct page *page, int numpages)
>  	return __change_page_attr_set_clr(&cpa, 0);
>  }
>
> -int set_direct_map_invalid_noflush(struct page *page)
> +int set_direct_map_invalid_noflush(struct page *page, int numpages)
> {
> -	return __set_pages_np(page, 1);
> +	return __set_pages_np(page, numpages);
> }
>
> -int set_direct_map_default_noflush(struct page *page)
> +int set_direct_map_default_noflush(struct page *page, int numpages)
> {
> -	return __set_pages_p(page, 1);
> +	return __set_pages_p(page, numpages);
> }
>

So, what happens if we succeed in setting set_direct_map_invalid_noflush()
for some pages but then fail when having to split a large mapping?

Did I miss something, or would the current code not undo what it partially
did? Or do we simply not care?

I guess to handle this cleanly we would either have to catch all error
cases first (esp. splitting large mappings) before actually performing the
set to invalid, or have some recovery code in place if possible.

AFAICS, your patch #5 right now only calls it with 1 page; do we need this
change at all? Feels to me like a leftover from older versions, where we
could have had more than a single page.

-- 
Thanks,

David / dhildenb
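
For reference, a minimal sketch of how the arm64 hunk quoted above might
look with both nits folded in (const size plus an early exit for
!numpages). This is only the quoted code with the suggested tweaks applied
for illustration, not the posted patch:

int set_direct_map_invalid_noflush(struct page *page, int numpages)
{
	struct page_change_data data = {
		.set_mask = __pgprot(0),
		.clear_mask = __pgprot(PTE_VALID),
	};
	/* computed once up front and never modified, hence const */
	const unsigned long size = PAGE_SIZE * numpages;

	/* early exit: an empty range has nothing to update */
	if (!numpages)
		return 0;

	if (!debug_pagealloc_enabled() && !rodata_full)
		return 0;

	return apply_to_page_range(&init_mm,
				   (unsigned long)page_address(page),
				   size, change_page_range, &data);
}

set_direct_map_default_noflush() would get the same treatment, only with
its own set/clear masks.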