From: Song Liu <song@kernel.org>
Date: Tue, 8 Nov 2022 10:41:53 -0800
Subject: Re: [PATCH bpf-next v2 0/5] execmem_alloc for BPF programs
To: Mike Rapoport
Cc: bpf@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org,
    x86@kernel.org, peterz@infradead.org, hch@lst.de,
    rick.p.edgecombe@intel.com, aaron.lu@intel.com, mcgrof@kernel.org
References: <20221107223921.3451913-1-song@kernel.org>

On Tue, Nov 8, 2022 at 3:27 AM Mike Rapoport wrote:
>
> Hi Song,
>
> On Mon, Nov 07, 2022 at 02:39:16PM -0800, Song Liu wrote:
> > This patchset tries to address the following issues:
> >
> > 1. Direct map fragmentation
> >
> > On x86, STRICT_*_RWX requires the direct map of any RO+X memory to be
> > RO+X as well. These set_memory_* calls cause 1GB page table entries to
> > be split into 2MB and 4kB ones. This fragmentation of the direct map
> > results in bigger and slower page tables and puts pressure on both the
> > instruction and data TLB.
> >
> > Our previous work on bpf_prog_pack tries to address this issue from
> > the BPF program side. Based on the experiments by Aaron Lu [4],
> > bpf_prog_pack has greatly reduced direct map fragmentation from BPF
> > programs.
>
> Using the set_memory_* APIs on memory allocated from the vmalloc/modules
> virtual range does not change the direct map; it only updates the
> permissions in the vmalloc range. The direct map splits occur in
> vm_remove_mappings() when the memory is *freed*.
>
> That said, both bpf_prog_pack and these patches do reduce the
> fragmentation, but this happens because the memory is freed to the
> system in 2M chunks and there are no splits of 2M pages. Besides, since
> the same 2M page is used for many BPF programs, there should be far
> fewer vfree() calls.
>
> > 2. iTLB pressure from BPF programs
> >
> > Dynamic kernel text such as modules and BPF programs (even with the
> > current bpf_prog_pack) uses 4kB pages on x86. When the total size of
> > modules and BPF programs is large, we see a visible performance drop
> > caused by a high iTLB miss rate.
>
> Like Luis mentioned several times already, it would be nice to see
> numbers.
>
> > 3. TLB shootdown for short-lived BPF programs
> >
> > Before bpf_prog_pack, loading and unloading BPF programs required a
> > global TLB shootdown. This patchset (and bpf_prog_pack) replaces it
> > with a local TLB flush.
> >
> > 4. Reduce memory usage by BPF programs (in some cases)
> >
> > Most BPF programs and various trampolines are small, and they often
> > occupy a whole page.
> > From a random server in our fleet, 50% of the loaded BPF programs are
> > less than 500 bytes in size, and 75% of them are less than 2kB in
> > size. Allowing these BPF programs to share 2MB pages would yield some
> > memory savings for systems with many BPF programs. For systems with
> > only a small number of BPF programs, this patch may waste a little
> > memory by allocating one 2MB page but using only part of it.
>
> I'm not convinced there are memory savings here. Unless you have
> hundreds of BPF programs, most of the 2M page will be wasted, won't it?
> So for systems that have moderate use of BPF, most of the 2M page will
> be unused, right?

There will be some memory waste in such cases. But it will get better
with:

1) with 4/5 and 5/5, BPF programs will share this 2MB page with the
   kernel .text section (_stext to _etext);
2) modules, ftrace, and kprobes will also share this 2MB page;
3) there are bigger BPF programs in many use cases.

> > Based on our experiments [5], we measured a 0.5% performance
> > improvement from bpf_prog_pack. This patchset further boosts the
> > improvement to 0.7%. The difference is because bpf_prog_pack uses
> > 512x 4kB pages instead of 1x 2MB page; bpf_prog_pack as-is doesn't
> > resolve #2 above.
> >
> > This patchset replaces bpf_prog_pack with a better API and makes it
> > available for other dynamic kernel text, such as modules, ftrace, and
> > kprobes.
>
> The proposed execmem_alloc() looks to me very much tailored for x86, to
> be used as a replacement for module_alloc(). Some architectures have a
> module_alloc() that is quite different from the default or x86 version,
> so I'd expect at least some explanation of how modules etc. can use the
> execmem_ APIs without breaking !x86 architectures.
>
> > This set enables BPF programs and BPF dispatchers to share huge pages
> > with a new API:
> >   execmem_alloc()
> >   execmem_free()
> >   execmem_fill()
> >
> > The idea is similar to Peter's suggestion in [1].
> >
> > execmem_alloc() manages a set of PMD_SIZE RO+X memory regions and
> > allocates this memory to its users. execmem_free() is used to free
> > memory allocated by execmem_alloc(). execmem_fill() is used to update
> > memory allocated by execmem_alloc().
> >
> > Memory allocated by execmem_alloc() is RO+X, so this does not violate
> > W^X. The caller has to update the content with a text_poke-like
> > mechanism. Specifically, execmem_fill() is provided to update memory
> > allocated by execmem_alloc(). execmem_fill() also makes sure the
> > update stays within the boundary of one chunk allocated by
> > execmem_alloc(). Please refer to patch 1/5 for more details.
>
> Unless I'm mistaken, a failure to allocate a PMD_SIZE page will fail
> text allocation altogether. That means that if somebody tries to load a
> BPF program on a busy, long-lived system, they are quite likely to fail
> because the high-order free lists might already be exhausted, although
> there is still plenty of free memory.
>
> Did you consider a fallback to small pages if the high-order allocation
> fails?

I think __vmalloc_node_range() already has the fallback mechanism (at
the end of the function).

Thanks,
Song
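
On the fallback question, the mechanism Song points at is internal to
__vmalloc_node_range() in mm/vmalloc.c: when a huge-page-backed attempt
fails, the function retries the whole allocation with 4kB pages. The
snippet below only illustrates that shape, with a hypothetical
try_vmap_at_shift() helper standing in for the internal allocation
attempt; it is not the mm/vmalloc.c source:

  static void *alloc_with_fallback(unsigned long size)
  {
          unsigned int shift = PMD_SHIFT;         /* try 2MB mappings first */
          void *p;

  again:
          p = try_vmap_at_shift(size, shift);     /* hypothetical helper */
          if (!p && shift > PAGE_SHIFT) {
                  shift = PAGE_SHIFT;             /* fall back to 4kB pages */
                  goto again;
          }
          return p;
  }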
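
A minimal usage sketch of the proposed API, for readers without patch
1/5 at hand. The prototypes and the jit_publish_image() caller below are
assumptions inferred from the cover letter's description, not copied
from the patch, so names, argument lists and error conventions may
differ from the real code:

  /*
   * Assumed prototypes (patch 1/5 has the authoritative definitions):
   *
   *   void *execmem_alloc(unsigned long size);
   *   void  execmem_free(void *addr);
   *   void *execmem_fill(void *dst, void *src, size_t len);
   */
  static void *jit_publish_image(void *image, size_t len)
  {
          void *ro_x;

          /* Carve an RO+X chunk out of a shared PMD_SIZE region. */
          ro_x = execmem_alloc(len);
          if (!ro_x)
                  return NULL;

          /*
           * The chunk is never writable from here, so it cannot be
           * memcpy()'d into. execmem_fill() performs the copy with a
           * text_poke()-like mechanism and keeps the update within the
           * chunk returned by execmem_alloc().
           */
          if (IS_ERR(execmem_fill(ro_x, image, len))) {
                  execmem_free(ro_x);     /* give the chunk back */
                  return NULL;
          }

          return ro_x;
  }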