Subject: Re: [PATCH v3 2/5] mm, page_poison: use static key more efficiently
To: Vlastimil Babka, Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Alexander Potapenko, Kees Cook, Michal Hocko, Mateusz Nosek, Laura Abbott
References: <20201113104033.22907-1-vbabka@suse.cz>
 <20201113104033.22907-3-vbabka@suse.cz>
From: David Hildenbrand
Organization: Red Hat GmbH
Message-ID: <4fea3200-70f8-620a-ec87-735882a530a5@redhat.com>
Date: Fri, 13 Nov 2020 13:08:50 +0100
In-Reply-To: <20201113104033.22907-3-vbabka@suse.cz>

On 13.11.20 11:40, Vlastimil Babka wrote:
> Commit 11c9c7edae06 ("mm/page_poison.c: replace bool variable with static key")
> changed page_poisoning_enabled() to a static key check. However, the function
> is not inlined, so each check still involves a function call whose overhead is
> not eliminated when page poisoning is disabled.
>
> Analogously to how debug_pagealloc is handled, this patch converts
> page_poisoning_enabled() back to a boolean check, and introduces
> page_poisoning_enabled_static() for fast paths. Both functions are inlined.
>
> The function kernel_poison_pages() is also called unconditionally and does
> the static key check inside. Remove the check from there and put it into the
> callers. Also split the function into kernel_poison_pages() and
> kernel_unpoison_pages() instead of using the confusing bool parameter.
>
> Also optimize the check that enables page poisoning instead of debug_pagealloc
> for architectures without proper debug_pagealloc support. Move the check to
> init_mem_debugging_and_hardening() to enable a single static key instead of
> having two static branches in page_poisoning_enabled_static().
>
> Signed-off-by: Vlastimil Babka
> ---
>  drivers/virtio/virtio_balloon.c |  2 +-
>  include/linux/mm.h              | 33 +++++++++++++++++---
>  mm/page_alloc.c                 | 18 +++++++++--
>  mm/page_poison.c                | 53 +++++----------------------------
>  4 files changed, 52 insertions(+), 54 deletions(-)
>
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index 481611c09dae..e53faed6ba93 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -1116,7 +1116,7 @@ static int virtballoon_validate(struct virtio_device *vdev)
>  	 */
>  	if (!want_init_on_free() &&
>  	    (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY) ||
> -	     !page_poisoning_enabled()))
> +	     !page_poisoning_enabled_static()))
>  		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
>  	else if (!virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON))
>  		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_REPORTING);
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 82ab5c894d94..5ab5358be2b3 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2862,12 +2862,37 @@ extern int apply_to_existing_page_range(struct mm_struct *mm,
>  
>  extern void init_mem_debugging_and_hardening(void);
>  #ifdef CONFIG_PAGE_POISONING
> -extern bool page_poisoning_enabled(void);
> -extern void kernel_poison_pages(struct page *page, int numpages, int enable);
> +extern void __kernel_poison_pages(struct page *page, int numpages);
> +extern void __kernel_unpoison_pages(struct page *page, int numpages);
> +extern bool _page_poisoning_enabled_early;
> +DECLARE_STATIC_KEY_FALSE(_page_poisoning_enabled);
> +static inline bool page_poisoning_enabled(void)
> +{
> +	return _page_poisoning_enabled_early;
> +}
> +/*
> + * For use in fast paths after init_mem_debugging() has run, or when a
> + * false negative result is not harmful when called too early.
> + */
> +static inline bool page_poisoning_enabled_static(void)
> +{
> +	return static_branch_unlikely(&_page_poisoning_enabled);
> +}
> +static inline void kernel_poison_pages(struct page *page, int numpages)
> +{
> +	if (page_poisoning_enabled_static())
> +		__kernel_poison_pages(page, numpages);
> +}
> +static inline void kernel_unpoison_pages(struct page *page, int numpages)
> +{
> +	if (page_poisoning_enabled_static())
> +		__kernel_unpoison_pages(page, numpages);
> +}
>  #else
>  static inline bool page_poisoning_enabled(void) { return false; }
> -static inline void kernel_poison_pages(struct page *page, int numpages,
> -					int enable) { }
> +static inline bool page_poisoning_enabled_static(void) { return false; }
> +static inline void kernel_poison_pages(struct page *page, int numpages) { }
> +static inline void kernel_unpoison_pages(struct page *page, int numpages) { }
>  #endif
>  
>  DECLARE_STATIC_KEY_FALSE(init_on_alloc);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 567060c2ad83..cd966829bed3 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -775,6 +775,17 @@ void init_mem_debugging_and_hardening(void)
>  		static_branch_enable(&init_on_free);
>  }
>  
> +#ifdef CONFIG_PAGE_POISONING
> +	/*
> +	 * Page poisoning is debug page alloc for some arches. If
> +	 * either of those options are enabled, enable poisoning.
> +	 */
> +	if (page_poisoning_enabled() ||
> +	     (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
> +	     debug_pagealloc_enabled()))
> +		static_branch_enable(&_page_poisoning_enabled);
> +#endif
> +
>  #ifdef CONFIG_DEBUG_PAGEALLOC
>  	if (!debug_pagealloc_enabled())
>  		return;
> @@ -1262,7 +1273,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
>  	if (want_init_on_free())
>  		kernel_init_free_pages(page, 1 << order);
>  
> -	kernel_poison_pages(page, 1 << order, 0);
> +	kernel_poison_pages(page, 1 << order);
> +
>  	/*
>  	 * arch_free_page() can make the page's contents inaccessible. s390
>  	 * does this. So nothing which can access the page's contents should
> @@ -2217,7 +2229,7 @@ static inline int check_new_page(struct page *page)
>  static inline bool free_pages_prezeroed(void)
>  {
>  	return (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
> -		page_poisoning_enabled()) || want_init_on_free();
> +		page_poisoning_enabled_static()) || want_init_on_free();
>  }
>  
>  #ifdef CONFIG_DEBUG_VM
> @@ -2279,7 +2291,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
>  	arch_alloc_page(page, order);
>  	debug_pagealloc_map_pages(page, 1 << order);
>  	kasan_alloc_pages(page, order);
> -	kernel_poison_pages(page, 1 << order, 1);
> +	kernel_unpoison_pages(page, 1 << order);
>  	set_page_owner(page, order, gfp_flags);
>  }
>  
> diff --git a/mm/page_poison.c b/mm/page_poison.c
> index e6c994af7518..0d899a01d107 100644
> --- a/mm/page_poison.c
> +++ b/mm/page_poison.c
> @@ -8,45 +8,17 @@
>  #include
>  #include
>  
> -static DEFINE_STATIC_KEY_FALSE_RO(want_page_poisoning);
> +bool _page_poisoning_enabled_early;
> +EXPORT_SYMBOL(_page_poisoning_enabled_early);
> +DEFINE_STATIC_KEY_FALSE(_page_poisoning_enabled);
> +EXPORT_SYMBOL(_page_poisoning_enabled);
>  
>  static int __init early_page_poison_param(char *buf)
>  {
> -	int ret;
> -	bool tmp;
> -
> -	ret = strtobool(buf, &tmp);
> -	if (ret)
> -		return ret;
> -
> -	if (tmp)
> -		static_branch_enable(&want_page_poisoning);
> -	else
> -		static_branch_disable(&want_page_poisoning);
> -
> -	return 0;
> +	return kstrtobool(buf, &_page_poisoning_enabled_early);
>  }
>  early_param("page_poison", early_page_poison_param);
>  
> -/**
> - * page_poisoning_enabled - check if page poisoning is enabled
> - *
> - * Return true if page poisoning is enabled, or false if not.
> - */
> -bool page_poisoning_enabled(void)
> -{
> -	/*
> -	 * Assumes that debug_pagealloc_enabled is set before
> -	 * memblock_free_all.
> -	 * Page poisoning is debug page alloc for some arches. If
> -	 * either of those options are enabled, enable poisoning.
> -	 */
> -	return (static_branch_unlikely(&want_page_poisoning) ||
> -		(!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
> -		debug_pagealloc_enabled()));
> -}
> -EXPORT_SYMBOL_GPL(page_poisoning_enabled);
> -
>  static void poison_page(struct page *page)
>  {
>  	void *addr = kmap_atomic(page);
> @@ -58,7 +30,7 @@ static void poison_page(struct page *page)
>  	kunmap_atomic(addr);
>  }
>  
> -static void poison_pages(struct page *page, int n)
> +void __kernel_poison_pages(struct page *page, int n)
>  {
>  	int i;
>  
> @@ -117,7 +89,7 @@ static void unpoison_page(struct page *page)
>  	kunmap_atomic(addr);
>  }
>  
> -static void unpoison_pages(struct page *page, int n)
> +void __kernel_unpoison_pages(struct page *page, int n)
>  {
>  	int i;
>  
> @@ -125,17 +97,6 @@ static void unpoison_pages(struct page *page, int n)
>  		unpoison_page(page + i);
>  }
>  
> -void kernel_poison_pages(struct page *page, int numpages, int enable)
> -{
> -	if (!page_poisoning_enabled())
> -		return;
> -
> -	if (enable)
> -		unpoison_pages(page, numpages);
> -	else
> -		poison_pages(page, numpages);
> -}
> -
>  #ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
>  void __kernel_map_pages(struct page *page, int numpages, int enable)
>  {
> 

Reviewed-by: David Hildenbrand

-- 
Thanks,

David / dhildenb