Subject: Network filesystems and netmem
From: David Howells
Date: 2025-08-08 13:16 UTC
To: Mina Almasry
Cc: dhowells, willy, hch, Jakub Kicinski, Eric Dumazet,
	Byungchul Park, netfs, netdev, linux-mm, linux-kernel

Hi Mina,

Apologies for not keeping up with the stuff I proposed, but I had to go and do
a load of bugfixing.  Anyway, that gave me time to think about the netmem
allocator and how *that* may be something network filesystems can make use of.
I particularly like the way it can do DMA/IOMMU mapping in bulk (at least, if
I understand it aright).

So what I'm thinking of is changing the network filesystems - at least the
ones I can - from using kmalloc() to allocate memory for protocol fragments to
using the netmem allocator.  However, I think this might need to be
parameterisable by:

 (1) The socket.  We might want to group allocations relating to the same
     socket or destined to route through the same NIC together.

 (2) The destination address.  Again, we might need to group by NIC.  For TCP
     sockets, this likely doesn't matter as a connected TCP socket already
     knows this, but for a UDP socket, you can set that in sendmsg() (and
     indeed AF_RXRPC does just that).

 (3) The lifetime.  On a crude level, I would provide a hint flag that
     indicates whether the data may be retained for some time (e.g. rxrpc
     DATA packets or TCP data) or whether it is something we aren't going
     to retain (e.g. rxrpc ACK packets), as we might want to group these
     differently.

So what I'm thinking of is creating a net core API that looks something like:

	#define NETMEM_HINT_UNRETAINED 0x1
	void *netmem_alloc(struct socket *sock, size_t len, unsigned int hints);
	void netmem_free(void *mem);

though I'm tempted to make it:

	int netmem_alloc(struct socket *sock, size_t len, unsigned int hints,
			 struct bio_vec *bv);
	void netmem_free(struct bio_vec *bv);

to accommodate Christoph's plans for the future of bio_vec.
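
To make the calling convention concrete, here's a rough sketch of how a
filesystem's transport code might use the bio_vec variant.  To be clear,
netmem_alloc() and netmem_free() don't exist yet and the fragment-fill
and error handling here are purely illustrative, though MSG_SPLICE_PAGES,
iov_iter_bvec() and sock_sendmsg() are the real interfaces:

	static int xxx_send_fragment(struct socket *sock, const void *data,
				     size_t len, unsigned int hints)
	{
		struct bio_vec bv;
		struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES };
		int ret;

		/* Hypothetical: get a fragment grouped by this socket;
		 * short-lived traffic would pass NETMEM_HINT_UNRETAINED.
		 */
		ret = netmem_alloc(sock, len, hints, &bv);
		if (ret < 0)
			return ret;

		/* Fill the fragment and splice it into the socket. */
		memcpy_to_page(bv.bv_page, bv.bv_offset, data, len);
		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bv, 1, len);

		ret = sock_sendmsg(sock, &msg);
		if (ret < 0)
			netmem_free(&bv);
		return ret;
	}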

I'm going to leave the pin-vs-ref issues for direct I/O and splice, and the
zerocopy-completion issues, for later.

I'm using cifs as a testcase for this idea and now have it able to do
MSG_SPLICE_PAGES, though at the moment it's just grabbing pages and copying
data into them in the transport layer rather than using a fragment allocator
or netmem.  See:

https://lore.kernel.org/linux-fsdevel/20250806203705.2560493-4-dhowells@redhat.com/T/#t
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=cifs-experimental
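
For reference, the interim copy-based approach amounts to something like
the following simplified sketch (the function name and surrounds are made
up, not lifted from the branch, and it assumes len <= PAGE_SIZE; but
alloc_page(), memcpy_to_page(), bvec_set_page(), iov_iter_bvec() and
sock_sendmsg() are the real kernel interfaces):

	static int smb_copy_and_splice(struct socket *sock, const void *data,
				       size_t len)
	{
		struct bio_vec bv;
		struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES };
		struct page *page;
		int ret;

		page = alloc_page(GFP_KERNEL);
		if (!page)
			return -ENOMEM;

		/* Copy the protocol data in the transport layer rather than
		 * allocating from a fragment allocator or netmem.
		 */
		memcpy_to_page(page, 0, data, len);
		bvec_set_page(&bv, page, len, 0);
		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bv, 1, len);

		ret = sock_sendmsg(sock, &msg);
		put_page(page);		/* The skb takes its own reference. */
		return ret;
	}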

David


