From: Leon Romanovsky <leon@kernel.org>
To: "D. Wythe" <alibuda@linux.alibaba.com>
Cc: Uladzislau Rezki <urezki@gmail.com>,
"David S. Miller" <davem@davemloft.net>,
Andrew Morton <akpm@linux-foundation.org>,
Dust Li <dust.li@linux.alibaba.com>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Sidraya Jayagond <sidraya@linux.ibm.com>,
Wenjia Zhang <wenjia@linux.ibm.com>,
Mahanta Jambigi <mjambigi@linux.ibm.com>,
Simon Horman <horms@kernel.org>,
Tony Lu <tonylu@linux.alibaba.com>,
Wen Gu <guwen@linux.alibaba.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-rdma@vger.kernel.org, linux-s390@vger.kernel.org,
netdev@vger.kernel.org, oliver.yang@linux.alibaba.com
Subject: Re: [PATCH net-next 2/3] mm: vmalloc: export find_vm_area()
Date: Thu, 29 Jan 2026 14:22:02 +0200 [thread overview]
Message-ID: <20260129122202.GF10992@unreal> (raw)
In-Reply-To: <20260129110323.GA80118@j66a10360.sqa.eu95>
On Thu, Jan 29, 2026 at 07:03:23PM +0800, D. Wythe wrote:
> On Wed, Jan 28, 2026 at 03:49:34PM +0200, Leon Romanovsky wrote:
> > On Wed, Jan 28, 2026 at 08:44:04PM +0800, D. Wythe wrote:
> > > On Wed, Jan 28, 2026 at 01:13:46PM +0200, Leon Romanovsky wrote:
> > > > On Wed, Jan 28, 2026 at 11:45:58AM +0800, D. Wythe wrote:
> > > > > On Tue, Jan 27, 2026 at 03:34:17PM +0200, Leon Romanovsky wrote:
> > > > > > On Sat, Jan 24, 2026 at 10:57:54PM +0800, D. Wythe wrote:
> > > > > > > On Sat, Jan 24, 2026 at 11:48:59AM +0100, Uladzislau Rezki wrote:
> > > > > > > > Hello, D. Wythe!
> > > > > > > >
> > > > > > > > > On Fri, Jan 23, 2026 at 07:55:17PM +0100, Uladzislau Rezki wrote:
> > > > > > > > > > On Fri, Jan 23, 2026 at 04:23:48PM +0800, D. Wythe wrote:
> > > > > > > > > > > find_vm_area() provides a way to find the vm_struct associated with a
> > > > > > > > > > > virtual address. Export this symbol to modules so that modularized
> > > > > > > > > > > subsystems can perform lookups on vmalloc addresses.
> > > > > > > > > > >
> > > > > > > > > > > Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
> > > > > > > > > > > ---
> > > > > > > > > > > mm/vmalloc.c | 1 +
> > > > > > > > > > > 1 file changed, 1 insertion(+)
> > > > > > > > > > >
> > > > > > > > > > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > > > > > > > > > index ecbac900c35f..3eb9fe761c34 100644
> > > > > > > > > > > --- a/mm/vmalloc.c
> > > > > > > > > > > +++ b/mm/vmalloc.c
> > > > > > > > > > > @@ -3292,6 +3292,7 @@ struct vm_struct *find_vm_area(const void *addr)
> > > > > > > > > > >
> > > > > > > > > > > return va->vm;
> > > > > > > > > > > }
> > > > > > > > > > > +EXPORT_SYMBOL_GPL(find_vm_area);
> > > > > > > > > > >
> > > > > > > > > > This is internal. We can not just export it.
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > Uladzislau Rezki
> > > > > > > > >
> > > > > > > > > Hi Uladzislau,
> > > > > > > > >
> > > > > > > > > Thank you for the feedback. I agree that we should avoid exposing
> > > > > > > > > internal implementation details like struct vm_struct to external
> > > > > > > > > subsystems.
> > > > > > > > >
> > > > > > > > > Following Christoph's suggestion, I'm planning to encapsulate the page
> > > > > > > > > order lookup into a minimal helper instead:
> > > > > > > > >
> > > > > > > > > unsigned int vmalloc_page_order(const void *addr)
> > > > > > > > > {
> > > > > > > > > 	struct vm_struct *vm = find_vm_area(addr);
> > > > > > > > >
> > > > > > > > > 	return vm ? vm->page_order : 0;
> > > > > > > > > }
> > > > > > > > > EXPORT_SYMBOL_GPL(vmalloc_page_order);
> > > > > > > > >
> > > > > > > > > Does this approach look reasonable to you? It would keep the vm_struct
> > > > > > > > > layout private while satisfying the optimization needs of SMC.
> > > > > > > > >
> > > > > > > > Could you please clarify why you need info about page_order? I have not
> > > > > > > > looked at your second patch.
> > > > > > > >
> > > > > > > > Thanks!
> > > > > > > >
> > > > > > > > --
> > > > > > > > Uladzislau Rezki
> > > > > > >
> > > > > > > Hi Uladzislau,
> > > > > > >
> > > > > > > This stems from optimizing memory registration in SMC-R. To provide the
> > > > > > > RDMA hardware with direct access to memory buffers, we must register
> > > > > > > them with the NIC. During this process, the hardware generates one MTT
> > > > > > > entry for each physically contiguous block. Since these hardware entries
> > > > > > > are a finite and scarce resource, and SMC currently defaults to a 4KB
> > > > > > > registration granularity, a single 2MB buffer consumes 512 entries. In
> > > > > > > high-concurrency scenarios, this inefficiency quickly exhausts NIC
> > > > > > > resources and becomes a major bottleneck for system scalability.
> > > > > >
> > > > > > I believe this complexity can be avoided by using the RDMA MR pool API,
> > > > > > as other ULPs do, for example NVMe.
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > >
> > > > > Hi Leon,
> > > > >
> > > > > Am I correct in assuming you are suggesting mr_pool to limit the number
> > > > > of MRs as a way to cap MTTE consumption?
> > > >
> > > > I don't see this as a limit, but as something that is considered standard
> > > > practice to reduce MTT consumption.
> > > >
> > > > >
> > > > > However, our goal is to maximize the total registered memory within
> > > > > the MTTE limits rather than to cap it. In SMC-R, each connection
> > > > > occupies a configurable, fixed-size registered buffer; consequently,
> > > > > the more memory we can register, the more concurrent connections
> > > > > we can support.
> > > >
> > > > It is not a cap, but more efficient use of existing resources.
> > >
> > > Got it. While an MR pool might be more standard practice, it doesn't
> > > address our specific bottleneck. In fact, SMC already has its own internal
> > > MR reuse; our core issue remains reducing MTTE consumption by increasing the
> > > registration granularity to maximize the memory mapped per MTT entry.
> >
> > And this is something MR pools can handle as well. We are going in circles,
> > so let's summarize.
>
> I believe some points need to be thoroughly clarified here:
>
> >
> > I see SMC‑R as one of the RDMA ULPs, and it should ideally rely on the
> > existing ULP API used by NVMe, NFS, and others, rather than maintaining its
> > own internal logic.
>
> SMC is not opposed to adopting newer RDMA interfaces; in fact, I have
> already planned a gradual migration to the updated RDMA APIs. We are
> currently in the process of adapting to ib_cqe, for instance. As long as
> functionality remains intact, there is no reason to oppose changes that
> reduce maintenance overhead or provide additional gains, but such a
> transition takes time.
>
> >
> > I also do not know whether vmalloc_page_order() is an appropriate solution;
> > I only want to show that we can probably achieve the same result without
> > introducing a new function.
>
> Regarding the specific issue under discussion, I believe the newer RDMA
> APIs you mentioned do not solve my problem, at least for now. My
> understanding is that regardless of how MRs are pooled, the core
> requirement is to increase the page_size parameter in ib_map_mr_sg to
> maximize the physical size mapped per MTTE. From the code I have
> examined, I see no evidence of these new APIs utilizing values other
> than 4KB.
>
> Of course, I believe that regardless of whether this issue
> currently exists, it is something the RDMA community can resolve.
> However, as I mentioned, adapting to a new API takes time. Before a
> complete transition is achieved, we need to allow for some necessary
> updates to SMC.
I disagree with that statement.
SMC‑R has a long history of re‑implementing existing RDMA ULP APIs, and
not always correctly.
https://lore.kernel.org/netdev/20170510072627.12060-1-hch@lst.de/
https://lore.kernel.org/netdev/20241105112313.GE311159@unreal/#t
Thanks
>
> Thanks
>
Thread overview: 30+ messages
2026-01-23 8:23 [PATCH net-next 0/3] net/smc: buffer allocation and registration improvements D. Wythe
2026-01-23 8:23 ` [PATCH net-next 1/3] net/smc: cap allocation order for SMC-R physically contiguous buffers D. Wythe
2026-01-23 10:54 ` Alexandra Winter
2026-01-24 9:22 ` D. Wythe
2026-01-23 8:23 ` [PATCH net-next 2/3] mm: vmalloc: export find_vm_area() D. Wythe
2026-01-23 14:44 ` Christoph Hellwig
2026-01-23 18:55 ` Uladzislau Rezki
2026-01-24 9:35 ` D. Wythe
2026-01-24 10:48 ` Uladzislau Rezki
2026-01-24 14:57 ` D. Wythe
2026-01-26 10:28 ` Uladzislau Rezki
2026-01-26 12:02 ` D. Wythe
2026-01-26 16:45 ` Uladzislau Rezki
2026-01-27 13:34 ` Leon Romanovsky
2026-01-28 3:45 ` D. Wythe
2026-01-28 11:13 ` Leon Romanovsky
2026-01-28 12:44 ` D. Wythe
2026-01-28 13:49 ` Leon Romanovsky
2026-01-29 11:03 ` D. Wythe
2026-01-29 12:22 ` Leon Romanovsky [this message]
2026-01-29 14:04 ` D. Wythe
2026-01-28 18:06 ` Jason Gunthorpe
2026-01-29 11:36 ` D. Wythe
2026-01-29 13:20 ` Jason Gunthorpe
2026-01-30 8:51 ` D. Wythe
2026-01-30 15:16 ` Jason Gunthorpe
2026-02-03 9:14 ` D. Wythe
2026-01-23 8:23 ` [PATCH net-next 3/3] net/smc: optimize MTTE consumption for SMC-R buffers D. Wythe
2026-01-23 14:52 ` Christoph Hellwig
2026-01-24 9:25 ` D. Wythe