Date: Tue, 23 Mar 2021 15:34:20 +0100
From: Michal Hocko
To: Johannes Weiner
Cc: Arjun Roy, Andrew Morton, David Miller, netdev,
    Linux Kernel Mailing List, Cgroups, Linux MM, Shakeel Butt,
    Eric Dumazet, Soheil Hassas Yeganeh, Jakub Kicinski, Yang Shi,
    Roman Gushchin
Subject: Re: [mm, net-next v2] mm: net: memcg accounting for TCP rx zerocopy
References: <20210316041645.144249-1-arjunroy.kdev@gmail.com>
On Wed 17-03-21 18:12:55, Johannes Weiner wrote:
[...]
> Here is an idea of how it could work:
>
> struct page already has
>
>         struct {    /* page_pool used by netstack */
>             /**
>              * @dma_addr: might require a 64-bit value even on
>              * 32-bit architectures.
>              */
>             dma_addr_t dma_addr;
>         };
>
> and as you can see from its union neighbors, there is quite a bit more
> room to store private data necessary for the page pool.
>
> When a page's refcount hits zero and it's a networking page, we can
> feed it back to the page pool instead of the page allocator.
>
> From a first look, we should be able to use the PG_owner_priv_1 page
> flag for network pages (see how this flag is overloaded, we can add a
> PG_network alias). With this, we can identify the page in __put_page()
> and __release_page(). These functions are already aware of different
> types of pages and do their respective cleanup handling. We can
> similarly make network a first-class citizen and hand pages back to
> the network allocator from in there.

For compound pages we have a concept of destructors. Maybe we can extend
that to order-0 pages as well. struct page is heavily packed and
compound_dtor shares its storage with other metadata:

            int pages;                       /*    16     4 */
            unsigned char compound_dtor;     /*    16     1 */
            atomic_t hpage_pinned_refcount;  /*    16     4 */
            pgtable_t pmd_huge_pte;          /*    16     8 */
            void *zone_device_data;          /*    16     8 */

But none of those should really need to be valid when a page is freed,
unless I am missing something. It would still require auditing their
users to check whether any of them can leave state behind. But if we
can establish a contract that compound_dtor is always valid when a page
is freed, that would be a really nice and useful abstraction, because
you wouldn't have to care about the specific type of page.

But maybe I am just overlooking the real complexity there.
-- 
Michal Hocko
SUSE Labs
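
[Editor's sketch] To make the hand-off concrete, here is a minimal sketch of
how the PG_network idea and an order-0 "destructor" could meet in the free
path. This is illustrative only: PG_network, PageNetwork() and
page_pool_return_page() are hypothetical names, not existing kernel
interfaces, and whether such a destructor contract can be established for
every page type is exactly the open question above.

	/* include/linux/page-flags.h: alias an existing, overloaded bit */
	PG_network = PG_owner_priv_1,	/* hypothetical: page owned by the netstack page pool */

	PAGEFLAG(Network, network, PF_NO_TAIL)	/* PageNetwork/SetPageNetwork/ClearPageNetwork */

	/* mm/swap.c: recycle zero-refcount network pages instead of freeing them */
	void __put_page(struct page *page)
	{
		if (unlikely(PageCompound(page))) {
			/* compound pages already dispatch through compound_dtor */
			__put_compound_page(page);
			return;
		}

		if (unlikely(PageNetwork(page))) {
			/* order-0 "destructor": hand the page back to the page pool */
			page_pool_return_page(page);	/* hypothetical helper */
			return;
		}

		__put_single_page(page);	/* back to the buddy allocator */
	}

Generalizing this would mean replacing the explicit PageNetwork() check with
a per-page destructor index, i.e. making something like compound_dtor valid
for order-0 pages at free time, as suggested above.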