linux-mm.kvack.org archive mirror
* Re: [PATCH RFC 01/11] net/mlx5e: Single flow order-0 pages for Striding RQ
       [not found] ` <1473252152-11379-2-git-send-email-saeedm@mellanox.com>
@ 2016-09-07 19:18   ` Jesper Dangaard Brouer
  2016-09-15 14:28     ` Tariq Toukan
  0 siblings, 1 reply; 3+ messages in thread
From: Jesper Dangaard Brouer @ 2016-09-07 19:18 UTC (permalink / raw)
  To: Saeed Mahameed
  Cc: iovisor-dev, netdev, Tariq Toukan, Brenden Blanco,
	Alexei Starovoitov, Tom Herbert, Martin KaFai Lau,
	Daniel Borkmann, Eric Dumazet, Jamal Hadi Salim, brouer,
	linux-mm


On Wed,  7 Sep 2016 15:42:22 +0300 Saeed Mahameed <saeedm@mellanox.com> wrote:

> From: Tariq Toukan <tariqt@mellanox.com>
> 
> To improve the memory consumption scheme, we omit the flow that
> demands and splits high-order pages in Striding RQ, and stay
> with a single Striding RQ flow that uses order-0 pages.

Thank you for doing this! MM-list people thank you!

For others to understand what this means: this driver was doing
split_page() on high-order pages (for Striding RQ). This was really bad
because it fragments the page allocator and quickly depletes the
available high-order pages.

(I've left the rest of the patch intact below, in case some MM people
are interested in looking at the changes.)

There is even a funny comment in split_page() relevant to this:

/* [...]
 * Note: this is probably too low level an operation for use in drivers.
 * Please consult with lkml before using this in your driver.
 */
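
For MM folks who don't want to read the whole diff: the flow being
removed boils down to roughly this (condensed from the old
mlx5e_alloc_rx_linear_mpwqe() further down; error handling dropped and
locals renamed for brevity):

  /* Old Striding RQ buffer allocation, condensed: grab one high-order
   * page per multi-packet WQE, split it, and manage the resulting
   * order-0 pages' refcounts per stride.
   */
  page = alloc_pages_node(NUMA_NO_NODE,
                          GFP_ATOMIC | __GFP_COLD | __GFP_MEMALLOC,
                          MLX5_MPWRQ_WQE_PAGE_ORDER);  /* high-order alloc */

  split_page(page, MLX5_MPWRQ_WQE_PAGE_ORDER);         /* -> order-0 pages */

  for (i = 0; i < MLX5_MPWRQ_PAGES_PER_WQE; i++)
          page_ref_add(&page[i], mlx5e_mpwqe_strides_per_page(rq));

When that high-order allocation fails (which happens quickly on a
fragmented system), the driver had to fall back to the order-0 UMR path
anyway, which is why the patch keeps only that path.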


> Moving to fragmented memory allows the use of larger MPWQEs,
> which reduces the number of UMR posts and filler CQEs.
> 
> Moving to a single flow allows several optimizations that improve
> performance, especially in production servers where we would
> anyway fallback to order-0 allocations:
> - inline functions that were called via function pointers.
> - improve the UMR post process.
> 
> This patch alone is expected to give a slight performance reduction.
> However, the new memory scheme gives the possibility to use a page-cache
> of a fair size, that doesn't inflate the memory footprint, which will
> dramatically fix the reduction and even give a huge gain.
> 
> We ran pktgen single-stream benchmarks, with iptables-raw-drop:
> 
> Single stride, 64 bytes:
> * 4,739,057 - baseline
> * 4,749,550 - this patch
> no reduction
> 
> Larger packets, no page cross, 1024 bytes:
> * 3,982,361 - baseline
> * 3,845,682 - this patch
> 3.5% reduction
> 
> Larger packets, every 3rd packet crosses a page, 1500 bytes:
> * 3,731,189 - baseline
> * 3,579,414 - this patch
> 4% reduction
> 

Well, the reduction does not really matter that much, because your
baseline benchmarks are from a freshly booted system, where you have
not yet fragmented and depleted the high-order pages... ;-)


> Fixes: 461017cb006a ("net/mlx5e: Support RX multi-packet WQE (Striding RQ)")
> Fixes: bc77b240b3c5 ("net/mlx5e: Add fragmented memory support for RX multi packet WQE")
> Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/en.h       |  54 ++--
>  drivers/net/ethernet/mellanox/mlx5/core/en_main.c  | 136 ++++++++--
>  drivers/net/ethernet/mellanox/mlx5/core/en_rx.c    | 292 ++++-----------------
>  drivers/net/ethernet/mellanox/mlx5/core/en_stats.h |   4 -
>  drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c  |   2 +-
>  5 files changed, 184 insertions(+), 304 deletions(-)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> index bf722aa..075cdfc 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> @@ -62,12 +62,12 @@
>  #define MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE                0xd
>  
>  #define MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE_MPW            0x1
> -#define MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE_MPW            0x4
> +#define MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE_MPW            0x3
>  #define MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE_MPW            0x6
>  
>  #define MLX5_MPWRQ_LOG_STRIDE_SIZE		6  /* >= 6, HW restriction */
>  #define MLX5_MPWRQ_LOG_STRIDE_SIZE_CQE_COMPRESS	8  /* >= 6, HW restriction */
> -#define MLX5_MPWRQ_LOG_WQE_SZ			17
> +#define MLX5_MPWRQ_LOG_WQE_SZ			18
>  #define MLX5_MPWRQ_WQE_PAGE_ORDER  (MLX5_MPWRQ_LOG_WQE_SZ - PAGE_SHIFT > 0 ? \
>  				    MLX5_MPWRQ_LOG_WQE_SZ - PAGE_SHIFT : 0)
>  #define MLX5_MPWRQ_PAGES_PER_WQE		BIT(MLX5_MPWRQ_WQE_PAGE_ORDER)
> @@ -293,8 +293,8 @@ struct mlx5e_rq {
>  	u32                    wqe_sz;
>  	struct sk_buff       **skb;
>  	struct mlx5e_mpw_info *wqe_info;
> +	void                  *mtt_no_align;
>  	__be32                 mkey_be;
> -	__be32                 umr_mkey_be;
>  
>  	struct device         *pdev;
>  	struct net_device     *netdev;
> @@ -323,32 +323,15 @@ struct mlx5e_rq {
>  
>  struct mlx5e_umr_dma_info {
>  	__be64                *mtt;
> -	__be64                *mtt_no_align;
>  	dma_addr_t             mtt_addr;
> -	struct mlx5e_dma_info *dma_info;
> +	struct mlx5e_dma_info  dma_info[MLX5_MPWRQ_PAGES_PER_WQE];
> +	struct mlx5e_umr_wqe   wqe;
>  };
>  
>  struct mlx5e_mpw_info {
> -	union {
> -		struct mlx5e_dma_info     dma_info;
> -		struct mlx5e_umr_dma_info umr;
> -	};
> +	struct mlx5e_umr_dma_info umr;
>  	u16 consumed_strides;
>  	u16 skbs_frags[MLX5_MPWRQ_PAGES_PER_WQE];
> -
> -	void (*dma_pre_sync)(struct device *pdev,
> -			     struct mlx5e_mpw_info *wi,
> -			     u32 wqe_offset, u32 len);
> -	void (*add_skb_frag)(struct mlx5e_rq *rq,
> -			     struct sk_buff *skb,
> -			     struct mlx5e_mpw_info *wi,
> -			     u32 page_idx, u32 frag_offset, u32 len);
> -	void (*copy_skb_header)(struct device *pdev,
> -				struct sk_buff *skb,
> -				struct mlx5e_mpw_info *wi,
> -				u32 page_idx, u32 offset,
> -				u32 headlen);
> -	void (*free_wqe)(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi);
>  };
>  
>  struct mlx5e_tx_wqe_info {
> @@ -706,24 +689,11 @@ void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
>  void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
>  bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq);
>  int mlx5e_alloc_rx_wqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe, u16 ix);
> -int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe, u16 ix);
> +int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe,	u16 ix);
>  void mlx5e_dealloc_rx_wqe(struct mlx5e_rq *rq, u16 ix);
>  void mlx5e_dealloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix);
> -void mlx5e_post_rx_fragmented_mpwqe(struct mlx5e_rq *rq);
> -void mlx5e_complete_rx_linear_mpwqe(struct mlx5e_rq *rq,
> -				    struct mlx5_cqe64 *cqe,
> -				    u16 byte_cnt,
> -				    struct mlx5e_mpw_info *wi,
> -				    struct sk_buff *skb);
> -void mlx5e_complete_rx_fragmented_mpwqe(struct mlx5e_rq *rq,
> -					struct mlx5_cqe64 *cqe,
> -					u16 byte_cnt,
> -					struct mlx5e_mpw_info *wi,
> -					struct sk_buff *skb);
> -void mlx5e_free_rx_linear_mpwqe(struct mlx5e_rq *rq,
> -				struct mlx5e_mpw_info *wi);
> -void mlx5e_free_rx_fragmented_mpwqe(struct mlx5e_rq *rq,
> -				    struct mlx5e_mpw_info *wi);
> +void mlx5e_post_rx_mpwqe(struct mlx5e_rq *rq);
> +void mlx5e_free_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi);
>  struct mlx5_cqe64 *mlx5e_get_cqe(struct mlx5e_cq *cq);
>  
>  void mlx5e_rx_am(struct mlx5e_rq *rq);
> @@ -810,6 +780,12 @@ static inline void mlx5e_cq_arm(struct mlx5e_cq *cq)
>  	mlx5_cq_arm(mcq, MLX5_CQ_DB_REQ_NOT, mcq->uar->map, NULL, cq->wq.cc);
>  }
>  
> +static inline u32 mlx5e_get_wqe_mtt_offset(struct mlx5e_rq *rq, u16 wqe_ix)
> +{
> +	return rq->mpwqe_mtt_offset +
> +		wqe_ix * ALIGN(MLX5_MPWRQ_PAGES_PER_WQE, 8);
> +}
> +
>  static inline int mlx5e_get_max_num_channels(struct mlx5_core_dev *mdev)
>  {
>  	return min_t(int, mdev->priv.eq_table.num_comp_vectors,
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> index 2459c7f..0db4d3b 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> @@ -138,7 +138,6 @@ static void mlx5e_update_sw_counters(struct mlx5e_priv *priv)
>  		s->rx_csum_unnecessary_inner += rq_stats->csum_unnecessary_inner;
>  		s->rx_wqe_err   += rq_stats->wqe_err;
>  		s->rx_mpwqe_filler += rq_stats->mpwqe_filler;
> -		s->rx_mpwqe_frag   += rq_stats->mpwqe_frag;
>  		s->rx_buff_alloc_err += rq_stats->buff_alloc_err;
>  		s->rx_cqe_compress_blks += rq_stats->cqe_compress_blks;
>  		s->rx_cqe_compress_pkts += rq_stats->cqe_compress_pkts;
> @@ -298,6 +297,107 @@ static void mlx5e_disable_async_events(struct mlx5e_priv *priv)
>  #define MLX5E_HW2SW_MTU(hwmtu) (hwmtu - (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN))
>  #define MLX5E_SW2HW_MTU(swmtu) (swmtu + (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN))
>  
> +static inline int mlx5e_get_wqe_mtt_sz(void)
> +{
> +	/* UMR copies MTTs in units of MLX5_UMR_MTT_ALIGNMENT bytes.
> +	 * To avoid copying garbage after the mtt array, we allocate
> +	 * a little more.
> +	 */
> +	return ALIGN(MLX5_MPWRQ_PAGES_PER_WQE * sizeof(__be64),
> +		     MLX5_UMR_MTT_ALIGNMENT);
> +}
> +
> +static inline void mlx5e_build_umr_wqe(struct mlx5e_rq *rq, struct mlx5e_sq *sq,
> +				       struct mlx5e_umr_wqe *wqe, u16 ix)
> +{
> +	struct mlx5_wqe_ctrl_seg      *cseg = &wqe->ctrl;
> +	struct mlx5_wqe_umr_ctrl_seg *ucseg = &wqe->uctrl;
> +	struct mlx5_wqe_data_seg      *dseg = &wqe->data;
> +	struct mlx5e_mpw_info *wi = &rq->wqe_info[ix];
> +	u8 ds_cnt = DIV_ROUND_UP(sizeof(*wqe), MLX5_SEND_WQE_DS);
> +	u32 umr_wqe_mtt_offset = mlx5e_get_wqe_mtt_offset(rq, ix);
> +
> +	cseg->qpn_ds    = cpu_to_be32((sq->sqn << MLX5_WQE_CTRL_QPN_SHIFT) |
> +				      ds_cnt);
> +	cseg->fm_ce_se  = MLX5_WQE_CTRL_CQ_UPDATE;
> +	cseg->imm       = rq->mkey_be;
> +
> +	ucseg->flags = MLX5_UMR_TRANSLATION_OFFSET_EN;
> +	ucseg->klm_octowords =
> +		cpu_to_be16(MLX5_MTT_OCTW(MLX5_MPWRQ_PAGES_PER_WQE));
> +	ucseg->bsf_octowords =
> +		cpu_to_be16(MLX5_MTT_OCTW(umr_wqe_mtt_offset));
> +	ucseg->mkey_mask     = cpu_to_be64(MLX5_MKEY_MASK_FREE);
> +
> +	dseg->lkey = sq->mkey_be;
> +	dseg->addr = cpu_to_be64(wi->umr.mtt_addr);
> +}
> +
> +static int mlx5e_rq_alloc_mpwqe_info(struct mlx5e_rq *rq,
> +				     struct mlx5e_channel *c)
> +{
> +	int wq_sz = mlx5_wq_ll_get_size(&rq->wq);
> +	int mtt_sz = mlx5e_get_wqe_mtt_sz();
> +	int mtt_alloc = mtt_sz + MLX5_UMR_ALIGN - 1;
> +	int i;
> +
> +	rq->wqe_info = kzalloc_node(wq_sz * sizeof(*rq->wqe_info),
> +				    GFP_KERNEL, cpu_to_node(c->cpu));
> +	if (!rq->wqe_info)
> +		goto err_out;
> +
> +	/* We allocate more than mtt_sz as we will align the pointer */
> +	rq->mtt_no_align = kzalloc_node(mtt_alloc * wq_sz, GFP_KERNEL,
> +					cpu_to_node(c->cpu));
> +	if (unlikely(!rq->mtt_no_align))
> +		goto err_free_wqe_info;
> +
> +	for (i = 0; i < wq_sz; i++) {
> +		struct mlx5e_mpw_info *wi = &rq->wqe_info[i];
> +
> +		wi->umr.mtt = PTR_ALIGN(rq->mtt_no_align + i * mtt_alloc,
> +					MLX5_UMR_ALIGN);
> +		wi->umr.mtt_addr = dma_map_single(c->pdev, wi->umr.mtt, mtt_sz,
> +						  PCI_DMA_TODEVICE);
> +		if (unlikely(dma_mapping_error(c->pdev, wi->umr.mtt_addr)))
> +			goto err_unmap_mtts;
> +
> +		mlx5e_build_umr_wqe(rq, &c->icosq, &wi->umr.wqe, i);
> +	}
> +
> +	return 0;
> +
> +err_unmap_mtts:
> +	while (--i >= 0) {
> +		struct mlx5e_mpw_info *wi = &rq->wqe_info[i];
> +
> +		dma_unmap_single(c->pdev, wi->umr.mtt_addr, mtt_sz,
> +				 PCI_DMA_TODEVICE);
> +	}
> +	kfree(rq->mtt_no_align);
> +err_free_wqe_info:
> +	kfree(rq->wqe_info);
> +
> +err_out:
> +	return -ENOMEM;
> +}
> +
> +static void mlx5e_rq_free_mpwqe_info(struct mlx5e_rq *rq)
> +{
> +	int wq_sz = mlx5_wq_ll_get_size(&rq->wq);
> +	int mtt_sz = mlx5e_get_wqe_mtt_sz();
> +	int i;
> +
> +	for (i = 0; i < wq_sz; i++) {
> +		struct mlx5e_mpw_info *wi = &rq->wqe_info[i];
> +
> +		dma_unmap_single(rq->pdev, wi->umr.mtt_addr, mtt_sz,
> +				 PCI_DMA_TODEVICE);
> +	}
> +	kfree(rq->mtt_no_align);
> +	kfree(rq->wqe_info);
> +}
> +
>  static int mlx5e_create_rq(struct mlx5e_channel *c,
>  			   struct mlx5e_rq_param *param,
>  			   struct mlx5e_rq *rq)
> @@ -322,14 +422,16 @@ static int mlx5e_create_rq(struct mlx5e_channel *c,
>  
>  	wq_sz = mlx5_wq_ll_get_size(&rq->wq);
>  
> +	rq->wq_type = priv->params.rq_wq_type;
> +	rq->pdev    = c->pdev;
> +	rq->netdev  = c->netdev;
> +	rq->tstamp  = &priv->tstamp;
> +	rq->channel = c;
> +	rq->ix      = c->ix;
> +	rq->priv    = c->priv;
> +
>  	switch (priv->params.rq_wq_type) {
>  	case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
> -		rq->wqe_info = kzalloc_node(wq_sz * sizeof(*rq->wqe_info),
> -					    GFP_KERNEL, cpu_to_node(c->cpu));
> -		if (!rq->wqe_info) {
> -			err = -ENOMEM;
> -			goto err_rq_wq_destroy;
> -		}
>  		rq->handle_rx_cqe = mlx5e_handle_rx_cqe_mpwrq;
>  		rq->alloc_wqe = mlx5e_alloc_rx_mpwqe;
>  		rq->dealloc_wqe = mlx5e_dealloc_rx_mpwqe;
> @@ -341,6 +443,10 @@ static int mlx5e_create_rq(struct mlx5e_channel *c,
>  		rq->mpwqe_num_strides = BIT(priv->params.mpwqe_log_num_strides);
>  		rq->wqe_sz = rq->mpwqe_stride_sz * rq->mpwqe_num_strides;
>  		byte_count = rq->wqe_sz;
> +		rq->mkey_be = cpu_to_be32(c->priv->umr_mkey.key);
> +		err = mlx5e_rq_alloc_mpwqe_info(rq, c);
> +		if (err)
> +			goto err_rq_wq_destroy;
>  		break;
>  	default: /* MLX5_WQ_TYPE_LINKED_LIST */
>  		rq->skb = kzalloc_node(wq_sz * sizeof(*rq->skb), GFP_KERNEL,
> @@ -359,27 +465,19 @@ static int mlx5e_create_rq(struct mlx5e_channel *c,
>  		rq->wqe_sz = SKB_DATA_ALIGN(rq->wqe_sz);
>  		byte_count = rq->wqe_sz;
>  		byte_count |= MLX5_HW_START_PADDING;
> +		rq->mkey_be = c->mkey_be;
>  	}
>  
>  	for (i = 0; i < wq_sz; i++) {
>  		struct mlx5e_rx_wqe *wqe = mlx5_wq_ll_get_wqe(&rq->wq, i);
>  
>  		wqe->data.byte_count = cpu_to_be32(byte_count);
> +		wqe->data.lkey = rq->mkey_be;
>  	}
>  
>  	INIT_WORK(&rq->am.work, mlx5e_rx_am_work);
>  	rq->am.mode = priv->params.rx_cq_period_mode;
>  
> -	rq->wq_type = priv->params.rq_wq_type;
> -	rq->pdev    = c->pdev;
> -	rq->netdev  = c->netdev;
> -	rq->tstamp  = &priv->tstamp;
> -	rq->channel = c;
> -	rq->ix      = c->ix;
> -	rq->priv    = c->priv;
> -	rq->mkey_be = c->mkey_be;
> -	rq->umr_mkey_be = cpu_to_be32(c->priv->umr_mkey.key);
> -
>  	return 0;
>  
>  err_rq_wq_destroy:
> @@ -392,7 +490,7 @@ static void mlx5e_destroy_rq(struct mlx5e_rq *rq)
>  {
>  	switch (rq->wq_type) {
>  	case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
> -		kfree(rq->wqe_info);
> +		mlx5e_rq_free_mpwqe_info(rq);
>  		break;
>  	default: /* MLX5_WQ_TYPE_LINKED_LIST */
>  		kfree(rq->skb);
> @@ -530,7 +628,7 @@ static void mlx5e_free_rx_descs(struct mlx5e_rq *rq)
>  
>  	/* UMR WQE (if in progress) is always at wq->head */
>  	if (test_bit(MLX5E_RQ_STATE_UMR_WQE_IN_PROGRESS, &rq->state))
> -		mlx5e_free_rx_fragmented_mpwqe(rq, &rq->wqe_info[wq->head]);
> +		mlx5e_free_rx_mpwqe(rq, &rq->wqe_info[wq->head]);
>  
>  	while (!mlx5_wq_ll_is_empty(wq)) {
>  		wqe_ix_be = *wq->tail_next;
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> index b6f8ebb..8ad4d32 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> @@ -200,7 +200,6 @@ int mlx5e_alloc_rx_wqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe, u16 ix)
>  
>  	*((dma_addr_t *)skb->cb) = dma_addr;
>  	wqe->data.addr = cpu_to_be64(dma_addr);
> -	wqe->data.lkey = rq->mkey_be;
>  
>  	rq->skb[ix] = skb;
>  
> @@ -231,44 +230,11 @@ static inline int mlx5e_mpwqe_strides_per_page(struct mlx5e_rq *rq)
>  	return rq->mpwqe_num_strides >> MLX5_MPWRQ_WQE_PAGE_ORDER;
>  }
>  
> -static inline void
> -mlx5e_dma_pre_sync_linear_mpwqe(struct device *pdev,
> -				struct mlx5e_mpw_info *wi,
> -				u32 wqe_offset, u32 len)
> -{
> -	dma_sync_single_for_cpu(pdev, wi->dma_info.addr + wqe_offset,
> -				len, DMA_FROM_DEVICE);
> -}
> -
> -static inline void
> -mlx5e_dma_pre_sync_fragmented_mpwqe(struct device *pdev,
> -				    struct mlx5e_mpw_info *wi,
> -				    u32 wqe_offset, u32 len)
> -{
> -	/* No dma pre sync for fragmented MPWQE */
> -}
> -
> -static inline void
> -mlx5e_add_skb_frag_linear_mpwqe(struct mlx5e_rq *rq,
> -				struct sk_buff *skb,
> -				struct mlx5e_mpw_info *wi,
> -				u32 page_idx, u32 frag_offset,
> -				u32 len)
> -{
> -	unsigned int truesize =	ALIGN(len, rq->mpwqe_stride_sz);
> -
> -	wi->skbs_frags[page_idx]++;
> -	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
> -			&wi->dma_info.page[page_idx], frag_offset,
> -			len, truesize);
> -}
> -
> -static inline void
> -mlx5e_add_skb_frag_fragmented_mpwqe(struct mlx5e_rq *rq,
> -				    struct sk_buff *skb,
> -				    struct mlx5e_mpw_info *wi,
> -				    u32 page_idx, u32 frag_offset,
> -				    u32 len)
> +static inline void mlx5e_add_skb_frag_mpwqe(struct mlx5e_rq *rq,
> +					    struct sk_buff *skb,
> +					    struct mlx5e_mpw_info *wi,
> +					    u32 page_idx, u32 frag_offset,
> +					    u32 len)
>  {
>  	unsigned int truesize =	ALIGN(len, rq->mpwqe_stride_sz);
>  
> @@ -282,24 +248,11 @@ mlx5e_add_skb_frag_fragmented_mpwqe(struct mlx5e_rq *rq,
>  }
>  
>  static inline void
> -mlx5e_copy_skb_header_linear_mpwqe(struct device *pdev,
> -				   struct sk_buff *skb,
> -				   struct mlx5e_mpw_info *wi,
> -				   u32 page_idx, u32 offset,
> -				   u32 headlen)
> -{
> -	struct page *page = &wi->dma_info.page[page_idx];
> -
> -	skb_copy_to_linear_data(skb, page_address(page) + offset,
> -				ALIGN(headlen, sizeof(long)));
> -}
> -
> -static inline void
> -mlx5e_copy_skb_header_fragmented_mpwqe(struct device *pdev,
> -				       struct sk_buff *skb,
> -				       struct mlx5e_mpw_info *wi,
> -				       u32 page_idx, u32 offset,
> -				       u32 headlen)
> +mlx5e_copy_skb_header_mpwqe(struct device *pdev,
> +			    struct sk_buff *skb,
> +			    struct mlx5e_mpw_info *wi,
> +			    u32 page_idx, u32 offset,
> +			    u32 headlen)
>  {
>  	u16 headlen_pg = min_t(u32, headlen, PAGE_SIZE - offset);
>  	struct mlx5e_dma_info *dma_info = &wi->umr.dma_info[page_idx];
> @@ -324,46 +277,9 @@ mlx5e_copy_skb_header_fragmented_mpwqe(struct device *pdev,
>  	}
>  }
>  
> -static u32 mlx5e_get_wqe_mtt_offset(struct mlx5e_rq *rq, u16 wqe_ix)
> -{
> -	return rq->mpwqe_mtt_offset +
> -		wqe_ix * ALIGN(MLX5_MPWRQ_PAGES_PER_WQE, 8);
> -}
> -
> -static void mlx5e_build_umr_wqe(struct mlx5e_rq *rq,
> -				struct mlx5e_sq *sq,
> -				struct mlx5e_umr_wqe *wqe,
> -				u16 ix)
> +static inline void mlx5e_post_umr_wqe(struct mlx5e_rq *rq, u16 ix)
>  {
> -	struct mlx5_wqe_ctrl_seg      *cseg = &wqe->ctrl;
> -	struct mlx5_wqe_umr_ctrl_seg *ucseg = &wqe->uctrl;
> -	struct mlx5_wqe_data_seg      *dseg = &wqe->data;
>  	struct mlx5e_mpw_info *wi = &rq->wqe_info[ix];
> -	u8 ds_cnt = DIV_ROUND_UP(sizeof(*wqe), MLX5_SEND_WQE_DS);
> -	u32 umr_wqe_mtt_offset = mlx5e_get_wqe_mtt_offset(rq, ix);
> -
> -	memset(wqe, 0, sizeof(*wqe));
> -	cseg->opmod_idx_opcode =
> -		cpu_to_be32((sq->pc << MLX5_WQE_CTRL_WQE_INDEX_SHIFT) |
> -			    MLX5_OPCODE_UMR);
> -	cseg->qpn_ds    = cpu_to_be32((sq->sqn << MLX5_WQE_CTRL_QPN_SHIFT) |
> -				      ds_cnt);
> -	cseg->fm_ce_se  = MLX5_WQE_CTRL_CQ_UPDATE;
> -	cseg->imm       = rq->umr_mkey_be;
> -
> -	ucseg->flags = MLX5_UMR_TRANSLATION_OFFSET_EN;
> -	ucseg->klm_octowords =
> -		cpu_to_be16(MLX5_MTT_OCTW(MLX5_MPWRQ_PAGES_PER_WQE));
> -	ucseg->bsf_octowords =
> -		cpu_to_be16(MLX5_MTT_OCTW(umr_wqe_mtt_offset));
> -	ucseg->mkey_mask     = cpu_to_be64(MLX5_MKEY_MASK_FREE);
> -
> -	dseg->lkey = sq->mkey_be;
> -	dseg->addr = cpu_to_be64(wi->umr.mtt_addr);
> -}
> -
> -static void mlx5e_post_umr_wqe(struct mlx5e_rq *rq, u16 ix)
> -{
>  	struct mlx5e_sq *sq = &rq->channel->icosq;
>  	struct mlx5_wq_cyc *wq = &sq->wq;
>  	struct mlx5e_umr_wqe *wqe;
> @@ -378,30 +294,22 @@ static void mlx5e_post_umr_wqe(struct mlx5e_rq *rq, u16 ix)
>  	}
>  
>  	wqe = mlx5_wq_cyc_get_wqe(wq, pi);
> -	mlx5e_build_umr_wqe(rq, sq, wqe, ix);
> +	memcpy(wqe, &wi->umr.wqe, sizeof(*wqe));
> +	wqe->ctrl.opmod_idx_opcode =
> +		cpu_to_be32((sq->pc << MLX5_WQE_CTRL_WQE_INDEX_SHIFT) |
> +			    MLX5_OPCODE_UMR);
> +
>  	sq->ico_wqe_info[pi].opcode = MLX5_OPCODE_UMR;
>  	sq->ico_wqe_info[pi].num_wqebbs = num_wqebbs;
>  	sq->pc += num_wqebbs;
>  	mlx5e_tx_notify_hw(sq, &wqe->ctrl, 0);
>  }
>  
> -static inline int mlx5e_get_wqe_mtt_sz(void)
> -{
> -	/* UMR copies MTTs in units of MLX5_UMR_MTT_ALIGNMENT bytes.
> -	 * To avoid copying garbage after the mtt array, we allocate
> -	 * a little more.
> -	 */
> -	return ALIGN(MLX5_MPWRQ_PAGES_PER_WQE * sizeof(__be64),
> -		     MLX5_UMR_MTT_ALIGNMENT);
> -}
> -
> -static int mlx5e_alloc_and_map_page(struct mlx5e_rq *rq,
> -				    struct mlx5e_mpw_info *wi,
> -				    int i)
> +static inline int mlx5e_alloc_and_map_page(struct mlx5e_rq *rq,
> +					   struct mlx5e_mpw_info *wi,
> +					   int i)
>  {
> -	struct page *page;
> -
> -	page = dev_alloc_page();
> +	struct page *page = dev_alloc_page();
>  	if (unlikely(!page))
>  		return -ENOMEM;
>  
> @@ -417,47 +325,25 @@ static int mlx5e_alloc_and_map_page(struct mlx5e_rq *rq,
>  	return 0;
>  }
>  
> -static int mlx5e_alloc_rx_fragmented_mpwqe(struct mlx5e_rq *rq,
> -					   struct mlx5e_rx_wqe *wqe,
> -					   u16 ix)
> +static int mlx5e_alloc_rx_umr_mpwqe(struct mlx5e_rq *rq,
> +				    struct mlx5e_rx_wqe *wqe,
> +				    u16 ix)
>  {
>  	struct mlx5e_mpw_info *wi = &rq->wqe_info[ix];
> -	int mtt_sz = mlx5e_get_wqe_mtt_sz();
>  	u64 dma_offset = (u64)mlx5e_get_wqe_mtt_offset(rq, ix) << PAGE_SHIFT;
> +	int pg_strides = mlx5e_mpwqe_strides_per_page(rq);
> +	int err;
>  	int i;
>  
> -	wi->umr.dma_info = kmalloc(sizeof(*wi->umr.dma_info) *
> -				   MLX5_MPWRQ_PAGES_PER_WQE,
> -				   GFP_ATOMIC);
> -	if (unlikely(!wi->umr.dma_info))
> -		goto err_out;
> -
> -	/* We allocate more than mtt_sz as we will align the pointer */
> -	wi->umr.mtt_no_align = kzalloc(mtt_sz + MLX5_UMR_ALIGN - 1,
> -				       GFP_ATOMIC);
> -	if (unlikely(!wi->umr.mtt_no_align))
> -		goto err_free_umr;
> -
> -	wi->umr.mtt = PTR_ALIGN(wi->umr.mtt_no_align, MLX5_UMR_ALIGN);
> -	wi->umr.mtt_addr = dma_map_single(rq->pdev, wi->umr.mtt, mtt_sz,
> -					  PCI_DMA_TODEVICE);
> -	if (unlikely(dma_mapping_error(rq->pdev, wi->umr.mtt_addr)))
> -		goto err_free_mtt;
> -
>  	for (i = 0; i < MLX5_MPWRQ_PAGES_PER_WQE; i++) {
> -		if (unlikely(mlx5e_alloc_and_map_page(rq, wi, i)))
> +		err = mlx5e_alloc_and_map_page(rq, wi, i);
> +		if (unlikely(err))
>  			goto err_unmap;
> -		page_ref_add(wi->umr.dma_info[i].page,
> -			     mlx5e_mpwqe_strides_per_page(rq));
> +		page_ref_add(wi->umr.dma_info[i].page, pg_strides);
>  		wi->skbs_frags[i] = 0;
>  	}
>  
>  	wi->consumed_strides = 0;
> -	wi->dma_pre_sync = mlx5e_dma_pre_sync_fragmented_mpwqe;
> -	wi->add_skb_frag = mlx5e_add_skb_frag_fragmented_mpwqe;
> -	wi->copy_skb_header = mlx5e_copy_skb_header_fragmented_mpwqe;
> -	wi->free_wqe     = mlx5e_free_rx_fragmented_mpwqe;
> -	wqe->data.lkey = rq->umr_mkey_be;
>  	wqe->data.addr = cpu_to_be64(dma_offset);
>  
>  	return 0;
> @@ -466,41 +352,28 @@ err_unmap:
>  	while (--i >= 0) {
>  		dma_unmap_page(rq->pdev, wi->umr.dma_info[i].addr, PAGE_SIZE,
>  			       PCI_DMA_FROMDEVICE);
> -		page_ref_sub(wi->umr.dma_info[i].page,
> -			     mlx5e_mpwqe_strides_per_page(rq));
> +		page_ref_sub(wi->umr.dma_info[i].page, pg_strides);
>  		put_page(wi->umr.dma_info[i].page);
>  	}
> -	dma_unmap_single(rq->pdev, wi->umr.mtt_addr, mtt_sz, PCI_DMA_TODEVICE);
> -
> -err_free_mtt:
> -	kfree(wi->umr.mtt_no_align);
> -
> -err_free_umr:
> -	kfree(wi->umr.dma_info);
>  
> -err_out:
> -	return -ENOMEM;
> +	return err;
>  }
>  
> -void mlx5e_free_rx_fragmented_mpwqe(struct mlx5e_rq *rq,
> -				    struct mlx5e_mpw_info *wi)
> +void mlx5e_free_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi)
>  {
> -	int mtt_sz = mlx5e_get_wqe_mtt_sz();
> +	int pg_strides = mlx5e_mpwqe_strides_per_page(rq);
>  	int i;
>  
>  	for (i = 0; i < MLX5_MPWRQ_PAGES_PER_WQE; i++) {
>  		dma_unmap_page(rq->pdev, wi->umr.dma_info[i].addr, PAGE_SIZE,
>  			       PCI_DMA_FROMDEVICE);
>  		page_ref_sub(wi->umr.dma_info[i].page,
> -			mlx5e_mpwqe_strides_per_page(rq) - wi->skbs_frags[i]);
> +			     pg_strides - wi->skbs_frags[i]);
>  		put_page(wi->umr.dma_info[i].page);
>  	}
> -	dma_unmap_single(rq->pdev, wi->umr.mtt_addr, mtt_sz, PCI_DMA_TODEVICE);
> -	kfree(wi->umr.mtt_no_align);
> -	kfree(wi->umr.dma_info);
>  }
>  
> -void mlx5e_post_rx_fragmented_mpwqe(struct mlx5e_rq *rq)
> +void mlx5e_post_rx_mpwqe(struct mlx5e_rq *rq)
>  {
>  	struct mlx5_wq_ll *wq = &rq->wq;
>  	struct mlx5e_rx_wqe *wqe = mlx5_wq_ll_get_wqe(wq, wq->head);
> @@ -508,12 +381,11 @@ void mlx5e_post_rx_fragmented_mpwqe(struct mlx5e_rq *rq)
>  	clear_bit(MLX5E_RQ_STATE_UMR_WQE_IN_PROGRESS, &rq->state);
>  
>  	if (unlikely(test_bit(MLX5E_RQ_STATE_FLUSH, &rq->state))) {
> -		mlx5e_free_rx_fragmented_mpwqe(rq, &rq->wqe_info[wq->head]);
> +		mlx5e_free_rx_mpwqe(rq, &rq->wqe_info[wq->head]);
>  		return;
>  	}
>  
>  	mlx5_wq_ll_push(wq, be16_to_cpu(wqe->next.next_wqe_index));
> -	rq->stats.mpwqe_frag++;
>  
>  	/* ensure wqes are visible to device before updating doorbell record */
>  	dma_wmb();
> @@ -521,84 +393,23 @@ void mlx5e_post_rx_fragmented_mpwqe(struct mlx5e_rq *rq)
>  	mlx5_wq_ll_update_db_record(wq);
>  }
>  
> -static int mlx5e_alloc_rx_linear_mpwqe(struct mlx5e_rq *rq,
> -				       struct mlx5e_rx_wqe *wqe,
> -				       u16 ix)
> -{
> -	struct mlx5e_mpw_info *wi = &rq->wqe_info[ix];
> -	gfp_t gfp_mask;
> -	int i;
> -
> -	gfp_mask = GFP_ATOMIC | __GFP_COLD | __GFP_MEMALLOC;
> -	wi->dma_info.page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
> -					     MLX5_MPWRQ_WQE_PAGE_ORDER);
> -	if (unlikely(!wi->dma_info.page))
> -		return -ENOMEM;
> -
> -	wi->dma_info.addr = dma_map_page(rq->pdev, wi->dma_info.page, 0,
> -					 rq->wqe_sz, PCI_DMA_FROMDEVICE);
> -	if (unlikely(dma_mapping_error(rq->pdev, wi->dma_info.addr))) {
> -		put_page(wi->dma_info.page);
> -		return -ENOMEM;
> -	}
> -
> -	/* We split the high-order page into order-0 ones and manage their
> -	 * reference counter to minimize the memory held by small skb fragments
> -	 */
> -	split_page(wi->dma_info.page, MLX5_MPWRQ_WQE_PAGE_ORDER);
> -	for (i = 0; i < MLX5_MPWRQ_PAGES_PER_WQE; i++) {
> -		page_ref_add(&wi->dma_info.page[i],
> -			     mlx5e_mpwqe_strides_per_page(rq));
> -		wi->skbs_frags[i] = 0;
> -	}
> -
> -	wi->consumed_strides = 0;
> -	wi->dma_pre_sync = mlx5e_dma_pre_sync_linear_mpwqe;
> -	wi->add_skb_frag = mlx5e_add_skb_frag_linear_mpwqe;
> -	wi->copy_skb_header = mlx5e_copy_skb_header_linear_mpwqe;
> -	wi->free_wqe     = mlx5e_free_rx_linear_mpwqe;
> -	wqe->data.lkey = rq->mkey_be;
> -	wqe->data.addr = cpu_to_be64(wi->dma_info.addr);
> -
> -	return 0;
> -}
> -
> -void mlx5e_free_rx_linear_mpwqe(struct mlx5e_rq *rq,
> -				struct mlx5e_mpw_info *wi)
> -{
> -	int i;
> -
> -	dma_unmap_page(rq->pdev, wi->dma_info.addr, rq->wqe_sz,
> -		       PCI_DMA_FROMDEVICE);
> -	for (i = 0; i < MLX5_MPWRQ_PAGES_PER_WQE; i++) {
> -		page_ref_sub(&wi->dma_info.page[i],
> -			mlx5e_mpwqe_strides_per_page(rq) - wi->skbs_frags[i]);
> -		put_page(&wi->dma_info.page[i]);
> -	}
> -}
> -
> -int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe, u16 ix)
> +int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe,	u16 ix)
>  {
>  	int err;
>  
> -	err = mlx5e_alloc_rx_linear_mpwqe(rq, wqe, ix);
> -	if (unlikely(err)) {
> -		err = mlx5e_alloc_rx_fragmented_mpwqe(rq, wqe, ix);
> -		if (unlikely(err))
> -			return err;
> -		set_bit(MLX5E_RQ_STATE_UMR_WQE_IN_PROGRESS, &rq->state);
> -		mlx5e_post_umr_wqe(rq, ix);
> -		return -EBUSY;
> -	}
> -
> -	return 0;
> +	err = mlx5e_alloc_rx_umr_mpwqe(rq, wqe, ix);
> +	if (unlikely(err))
> +		return err;
> +	set_bit(MLX5E_RQ_STATE_UMR_WQE_IN_PROGRESS, &rq->state);
> +	mlx5e_post_umr_wqe(rq, ix);
> +	return -EBUSY;
>  }
>  
>  void mlx5e_dealloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
>  {
>  	struct mlx5e_mpw_info *wi = &rq->wqe_info[ix];
>  
> -	wi->free_wqe(rq, wi);
> +	mlx5e_free_rx_mpwqe(rq, wi);
>  }
>  
>  #define RQ_CANNOT_POST(rq) \
> @@ -617,9 +428,10 @@ bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq)
>  		int err;
>  
>  		err = rq->alloc_wqe(rq, wqe, wq->head);
> +		if (err == -EBUSY)
> +			return true;
>  		if (unlikely(err)) {
> -			if (err != -EBUSY)
> -				rq->stats.buff_alloc_err++;
> +			rq->stats.buff_alloc_err++;
>  			break;
>  		}
>  
> @@ -823,7 +635,6 @@ static inline void mlx5e_mpwqe_fill_rx_skb(struct mlx5e_rq *rq,
>  					   u32 cqe_bcnt,
>  					   struct sk_buff *skb)
>  {
> -	u32 consumed_bytes = ALIGN(cqe_bcnt, rq->mpwqe_stride_sz);
>  	u16 stride_ix      = mpwrq_get_cqe_stride_index(cqe);
>  	u32 wqe_offset     = stride_ix * rq->mpwqe_stride_sz;
>  	u32 head_offset    = wqe_offset & (PAGE_SIZE - 1);
> @@ -837,21 +648,20 @@ static inline void mlx5e_mpwqe_fill_rx_skb(struct mlx5e_rq *rq,
>  		page_idx++;
>  		frag_offset -= PAGE_SIZE;
>  	}
> -	wi->dma_pre_sync(rq->pdev, wi, wqe_offset, consumed_bytes);
>  
>  	while (byte_cnt) {
>  		u32 pg_consumed_bytes =
>  			min_t(u32, PAGE_SIZE - frag_offset, byte_cnt);
>  
> -		wi->add_skb_frag(rq, skb, wi, page_idx, frag_offset,
> -				 pg_consumed_bytes);
> +		mlx5e_add_skb_frag_mpwqe(rq, skb, wi, page_idx, frag_offset,
> +					 pg_consumed_bytes);
>  		byte_cnt -= pg_consumed_bytes;
>  		frag_offset = 0;
>  		page_idx++;
>  	}
>  	/* copy header */
> -	wi->copy_skb_header(rq->pdev, skb, wi, head_page_idx, head_offset,
> -			    headlen);
> +	mlx5e_copy_skb_header_mpwqe(rq->pdev, skb, wi, head_page_idx,
> +				    head_offset, headlen);
>  	/* skb linear part was allocated with headlen and aligned to long */
>  	skb->tail += headlen;
>  	skb->len  += headlen;
> @@ -896,7 +706,7 @@ mpwrq_cqe_out:
>  	if (likely(wi->consumed_strides < rq->mpwqe_num_strides))
>  		return;
>  
> -	wi->free_wqe(rq, wi);
> +	mlx5e_free_rx_mpwqe(rq, wi);
>  	mlx5_wq_ll_pop(&rq->wq, cqe->wqe_id, &wqe->next.next_wqe_index);
>  }
>  
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
> index 499487c..1f56543 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
> @@ -73,7 +73,6 @@ struct mlx5e_sw_stats {
>  	u64 tx_xmit_more;
>  	u64 rx_wqe_err;
>  	u64 rx_mpwqe_filler;
> -	u64 rx_mpwqe_frag;
>  	u64 rx_buff_alloc_err;
>  	u64 rx_cqe_compress_blks;
>  	u64 rx_cqe_compress_pkts;
> @@ -105,7 +104,6 @@ static const struct counter_desc sw_stats_desc[] = {
>  	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xmit_more) },
>  	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_wqe_err) },
>  	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_mpwqe_filler) },
> -	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_mpwqe_frag) },
>  	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_buff_alloc_err) },
>  	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cqe_compress_blks) },
>  	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cqe_compress_pkts) },
> @@ -274,7 +272,6 @@ struct mlx5e_rq_stats {
>  	u64 lro_bytes;
>  	u64 wqe_err;
>  	u64 mpwqe_filler;
> -	u64 mpwqe_frag;
>  	u64 buff_alloc_err;
>  	u64 cqe_compress_blks;
>  	u64 cqe_compress_pkts;
> @@ -290,7 +287,6 @@ static const struct counter_desc rq_stats_desc[] = {
>  	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, lro_bytes) },
>  	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, wqe_err) },
>  	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, mpwqe_filler) },
> -	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, mpwqe_frag) },
>  	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, buff_alloc_err) },
>  	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cqe_compress_blks) },
>  	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cqe_compress_pkts) },
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
> index 9bf33bb..08d8b0c 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
> @@ -87,7 +87,7 @@ static void mlx5e_poll_ico_cq(struct mlx5e_cq *cq)
>  		case MLX5_OPCODE_NOP:
>  			break;
>  		case MLX5_OPCODE_UMR:
> -			mlx5e_post_rx_fragmented_mpwqe(&sq->channel->rq);
> +			mlx5e_post_rx_mpwqe(&sq->channel->rq);
>  			break;
>  		default:
>  			WARN_ONCE(true,



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [PATCH RFC 04/11] net/mlx5e: Build RX SKB on demand
       [not found] ` <1473252152-11379-5-git-send-email-saeedm@mellanox.com>
@ 2016-09-07 19:32   ` Jesper Dangaard Brouer
  0 siblings, 0 replies; 3+ messages in thread
From: Jesper Dangaard Brouer @ 2016-09-07 19:32 UTC (permalink / raw)
  To: Saeed Mahameed
  Cc: iovisor-dev, netdev, Tariq Toukan, Brenden Blanco,
	Alexei Starovoitov, Tom Herbert, Martin KaFai Lau,
	Daniel Borkmann, Eric Dumazet, Jamal Hadi Salim, brouer,
	linux-mm


On Wed,  7 Sep 2016 15:42:25 +0300 Saeed Mahameed <saeedm@mellanox.com> wrote:

> For non-striding RQ configuration before this patch we had a ring
> with pre-allocated SKBs and mapped the SKB->data buffers for
> device.
> 
> For robustness and better RX data buffers management, we allocate a
> page per packet and build_skb around it.
> 
> This patch (which is a prerequisite for XDP) will actually reduce
> performance for normal stack usage, because we are now hitting a bottleneck
> in the page allocator. A later patch of page reuse mechanism will be
> needed to restore or even improve performance in comparison to the old
> RX scheme.

Yes, it is true that there is a performance reduction (for the normal
stack, not XDP) caused by hitting a bottleneck in the page allocator.

I actually have a PoC implementation of my page_pool that shows we
regain the performance and then some.  It is based on an earlier version
of this patch, where I hooked it into the mlx5 driver (50Gbit/s version).


Your description might be a bit outdated, as this patch and the one
before it do contain your own driver-local page-cache recycle facility.
And you also show that you regain quite a lot of the lost performance.

Your driver-local page_cache does have its limitations (see comments on
the other patch), as it depends on a timely refcnt decrease by the users
of the page.  If they hold onto pages (like TCP does), then your
page-cache will not be efficient.
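
For MM folks skimming: the recycle-get side of that cache (condensed
from mlx5e_rx_cache_get() in the diff below) only reuses a page when
the driver holds the last reference to it:

  /* Refuse to recycle a page that someone else (e.g. an skb parked in
   * a TCP receive queue) still holds a reference to.
   */
  if (page_ref_count(cache->page_cache[cache->head].page) != 1) {
          rq->stats.cache_busy++;  /* still in use -> allocate a new page */
          return false;
  }
  *dma_info = cache->page_cache[cache->head];
  cache->head = (cache->head + 1) & (MLX5E_CACHE_SIZE - 1);
  rq->stats.cache_reuse++;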

 
> Packet rate performance testing was done with pktgen 64B packets on
> xmit side and TC drop action on RX side.

I assume this is TC _ingress_ dropping, like [1]

[1] https://github.com/netoptimizer/network-testing/blob/master/bin/tc_ingress_drop.sh

> CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
> 
> Comparison is done between:
>  1.Baseline, before 'net/mlx5e: Build RX SKB on demand'
>  2.Build SKB with RX page cache (This patch)
> 
> Streams    Baseline    Build SKB+page-cache    Improvement
> -----------------------------------------------------------
> 1          4.33Mpps      5.51Mpps                27%
> 2          7.35Mpps      11.5Mpps                52%
> 4          14.0Mpps      16.3Mpps                16%
> 8          22.2Mpps      29.6Mpps                20%
> 16         24.8Mpps      34.0Mpps                17%

The improvements gained from using your page-cache are impressively high.

Thanks for working on this,
 --Jesper
 
> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/en.h      |  10 +-
>  drivers/net/ethernet/mellanox/mlx5/core/en_main.c |  31 +++-
>  drivers/net/ethernet/mellanox/mlx5/core/en_rx.c   | 215 +++++++++++-----------
>  3 files changed, 133 insertions(+), 123 deletions(-)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> index afbdf70..a346112 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> @@ -65,6 +65,8 @@
>  #define MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE_MPW            0x3
>  #define MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE_MPW            0x6
>  
> +#define MLX5_RX_HEADROOM NET_SKB_PAD
> +
>  #define MLX5_MPWRQ_LOG_STRIDE_SIZE		6  /* >= 6, HW restriction */
>  #define MLX5_MPWRQ_LOG_STRIDE_SIZE_CQE_COMPRESS	8  /* >= 6, HW restriction */
>  #define MLX5_MPWRQ_LOG_WQE_SZ			18
> @@ -302,10 +304,14 @@ struct mlx5e_page_cache {
>  struct mlx5e_rq {
>  	/* data path */
>  	struct mlx5_wq_ll      wq;
> -	u32                    wqe_sz;
> -	struct sk_buff       **skb;
> +
> +	struct mlx5e_dma_info *dma_info;
>  	struct mlx5e_mpw_info *wqe_info;
>  	void                  *mtt_no_align;
> +	struct {
> +		u8             page_order;
> +		u32            wqe_sz;    /* wqe data buffer size */
> +	} buff;
>  	__be32                 mkey_be;
>  
>  	struct device         *pdev;
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> index c84702c..c9f1dea 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> @@ -411,6 +411,8 @@ static int mlx5e_create_rq(struct mlx5e_channel *c,
>  	void *rqc = param->rqc;
>  	void *rqc_wq = MLX5_ADDR_OF(rqc, rqc, wq);
>  	u32 byte_count;
> +	u32 frag_sz;
> +	int npages;
>  	int wq_sz;
>  	int err;
>  	int i;
> @@ -445,29 +447,40 @@ static int mlx5e_create_rq(struct mlx5e_channel *c,
>  
>  		rq->mpwqe_stride_sz = BIT(priv->params.mpwqe_log_stride_sz);
>  		rq->mpwqe_num_strides = BIT(priv->params.mpwqe_log_num_strides);
> -		rq->wqe_sz = rq->mpwqe_stride_sz * rq->mpwqe_num_strides;
> -		byte_count = rq->wqe_sz;
> +
> +		rq->buff.wqe_sz = rq->mpwqe_stride_sz * rq->mpwqe_num_strides;
> +		byte_count = rq->buff.wqe_sz;
>  		rq->mkey_be = cpu_to_be32(c->priv->umr_mkey.key);
>  		err = mlx5e_rq_alloc_mpwqe_info(rq, c);
>  		if (err)
>  			goto err_rq_wq_destroy;
>  		break;
>  	default: /* MLX5_WQ_TYPE_LINKED_LIST */
> -		rq->skb = kzalloc_node(wq_sz * sizeof(*rq->skb), GFP_KERNEL,
> -				       cpu_to_node(c->cpu));
> -		if (!rq->skb) {
> +		rq->dma_info = kzalloc_node(wq_sz * sizeof(*rq->dma_info), GFP_KERNEL,
> +					    cpu_to_node(c->cpu));
> +		if (!rq->dma_info) {
>  			err = -ENOMEM;
>  			goto err_rq_wq_destroy;
>  		}
> +
>  		rq->handle_rx_cqe = mlx5e_handle_rx_cqe;
>  		rq->alloc_wqe = mlx5e_alloc_rx_wqe;
>  		rq->dealloc_wqe = mlx5e_dealloc_rx_wqe;
>  
> -		rq->wqe_sz = (priv->params.lro_en) ?
> +		rq->buff.wqe_sz = (priv->params.lro_en) ?
>  				priv->params.lro_wqe_sz :
>  				MLX5E_SW2HW_MTU(priv->netdev->mtu);
> -		rq->wqe_sz = SKB_DATA_ALIGN(rq->wqe_sz);
> -		byte_count = rq->wqe_sz;
> +		byte_count = rq->buff.wqe_sz;
> +
> +		/* calc the required page order */
> +		frag_sz = MLX5_RX_HEADROOM +
> +			  byte_count /* packet data */ +
> +			  SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> +		frag_sz = SKB_DATA_ALIGN(frag_sz);
> +
> +		npages = DIV_ROUND_UP(frag_sz, PAGE_SIZE);
> +		rq->buff.page_order = order_base_2(npages);
> +
>  		byte_count |= MLX5_HW_START_PADDING;
>  		rq->mkey_be = c->mkey_be;
>  	}
> @@ -502,7 +515,7 @@ static void mlx5e_destroy_rq(struct mlx5e_rq *rq)
>  		mlx5e_rq_free_mpwqe_info(rq);
>  		break;
>  	default: /* MLX5_WQ_TYPE_LINKED_LIST */
> -		kfree(rq->skb);
> +		kfree(rq->dma_info);
>  	}
>  
>  	for (i = rq->page_cache.head; i != rq->page_cache.tail;
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> index 8e02af3..2f5bc6f 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> @@ -179,50 +179,99 @@ unlock:
>  	mutex_unlock(&priv->state_lock);
>  }
>  
> -int mlx5e_alloc_rx_wqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe, u16 ix)
> +#define RQ_PAGE_SIZE(rq) ((1 << rq->buff.page_order) << PAGE_SHIFT)
> +
> +static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq,
> +				      struct mlx5e_dma_info *dma_info)
>  {
> -	struct sk_buff *skb;
> -	dma_addr_t dma_addr;
> +	struct mlx5e_page_cache *cache = &rq->page_cache;
> +	u32 tail_next = (cache->tail + 1) & (MLX5E_CACHE_SIZE - 1);
>  
> -	skb = napi_alloc_skb(rq->cq.napi, rq->wqe_sz);
> -	if (unlikely(!skb))
> -		return -ENOMEM;
> +	if (tail_next == cache->head) {
> +		rq->stats.cache_full++;
> +		return false;
> +	}
> +
> +	cache->page_cache[cache->tail] = *dma_info;
> +	cache->tail = tail_next;
> +	return true;
> +}
> +
> +static inline bool mlx5e_rx_cache_get(struct mlx5e_rq *rq,
> +				      struct mlx5e_dma_info *dma_info)
> +{
> +	struct mlx5e_page_cache *cache = &rq->page_cache;
> +
> +	if (unlikely(cache->head == cache->tail)) {
> +		rq->stats.cache_empty++;
> +		return false;
> +	}
> +
> +	if (page_ref_count(cache->page_cache[cache->head].page) != 1) {
> +		rq->stats.cache_busy++;
> +		return false;
> +	}
> +
> +	*dma_info = cache->page_cache[cache->head];
> +	cache->head = (cache->head + 1) & (MLX5E_CACHE_SIZE - 1);
> +	rq->stats.cache_reuse++;
> +
> +	dma_sync_single_for_device(rq->pdev, dma_info->addr,
> +				   RQ_PAGE_SIZE(rq),
> +				   DMA_FROM_DEVICE);
> +	return true;
> +}
>  
> -	dma_addr = dma_map_single(rq->pdev,
> -				  /* hw start padding */
> -				  skb->data,
> -				  /* hw end padding */
> -				  rq->wqe_sz,
> -				  DMA_FROM_DEVICE);
> +static inline int mlx5e_page_alloc_mapped(struct mlx5e_rq *rq,
> +					  struct mlx5e_dma_info *dma_info)
> +{
> +	struct page *page;
>  
> -	if (unlikely(dma_mapping_error(rq->pdev, dma_addr)))
> -		goto err_free_skb;
> +	if (mlx5e_rx_cache_get(rq, dma_info))
> +		return 0;
>  
> -	*((dma_addr_t *)skb->cb) = dma_addr;
> -	wqe->data.addr = cpu_to_be64(dma_addr);
> +	page = dev_alloc_pages(rq->buff.page_order);
> +	if (unlikely(!page))
> +		return -ENOMEM;
>  
> -	rq->skb[ix] = skb;
> +	dma_info->page = page;
> +	dma_info->addr = dma_map_page(rq->pdev, page, 0,
> +				      RQ_PAGE_SIZE(rq), DMA_FROM_DEVICE);
> +	if (unlikely(dma_mapping_error(rq->pdev, dma_info->addr))) {
> +		put_page(page);
> +		return -ENOMEM;
> +	}
>  
>  	return 0;
> +}
>  
> -err_free_skb:
> -	dev_kfree_skb(skb);
> +void mlx5e_page_release(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info,
> +			bool recycle)
> +{
> +	if (likely(recycle) && mlx5e_rx_cache_put(rq, dma_info))
> +		return;
> +
> +	dma_unmap_page(rq->pdev, dma_info->addr, RQ_PAGE_SIZE(rq),
> +		       DMA_FROM_DEVICE);
> +	put_page(dma_info->page);
> +}
> +
> +int mlx5e_alloc_rx_wqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe, u16 ix)
> +{
> +	struct mlx5e_dma_info *di = &rq->dma_info[ix];
>  
> -	return -ENOMEM;
> +	if (unlikely(mlx5e_page_alloc_mapped(rq, di)))
> +		return -ENOMEM;
> +
> +	wqe->data.addr = cpu_to_be64(di->addr + MLX5_RX_HEADROOM);
> +	return 0;
>  }
>  
>  void mlx5e_dealloc_rx_wqe(struct mlx5e_rq *rq, u16 ix)
>  {
> -	struct sk_buff *skb = rq->skb[ix];
> +	struct mlx5e_dma_info *di = &rq->dma_info[ix];
>  
> -	if (skb) {
> -		rq->skb[ix] = NULL;
> -		dma_unmap_single(rq->pdev,
> -				 *((dma_addr_t *)skb->cb),
> -				 rq->wqe_sz,
> -				 DMA_FROM_DEVICE);
> -		dev_kfree_skb(skb);
> -	}
> +	mlx5e_page_release(rq, di, true);
>  }
>  
>  static inline int mlx5e_mpwqe_strides_per_page(struct mlx5e_rq *rq)
> @@ -305,79 +354,6 @@ static inline void mlx5e_post_umr_wqe(struct mlx5e_rq *rq, u16 ix)
>  	mlx5e_tx_notify_hw(sq, &wqe->ctrl, 0);
>  }
>  
> -static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq,
> -				      struct mlx5e_dma_info *dma_info)
> -{
> -	struct mlx5e_page_cache *cache = &rq->page_cache;
> -	u32 tail_next = (cache->tail + 1) & (MLX5E_CACHE_SIZE - 1);
> -
> -	if (tail_next == cache->head) {
> -		rq->stats.cache_full++;
> -		return false;
> -	}
> -
> -	cache->page_cache[cache->tail] = *dma_info;
> -	cache->tail = tail_next;
> -	return true;
> -}
> -
> -static inline bool mlx5e_rx_cache_get(struct mlx5e_rq *rq,
> -				      struct mlx5e_dma_info *dma_info)
> -{
> -	struct mlx5e_page_cache *cache = &rq->page_cache;
> -
> -	if (unlikely(cache->head == cache->tail)) {
> -		rq->stats.cache_empty++;
> -		return false;
> -	}
> -
> -	if (page_ref_count(cache->page_cache[cache->head].page) != 1) {
> -		rq->stats.cache_busy++;
> -		return false;
> -	}
> -
> -	*dma_info = cache->page_cache[cache->head];
> -	cache->head = (cache->head + 1) & (MLX5E_CACHE_SIZE - 1);
> -	rq->stats.cache_reuse++;
> -
> -	dma_sync_single_for_device(rq->pdev, dma_info->addr, PAGE_SIZE,
> -				   DMA_FROM_DEVICE);
> -	return true;
> -}
> -
> -static inline int mlx5e_page_alloc_mapped(struct mlx5e_rq *rq,
> -					  struct mlx5e_dma_info *dma_info)
> -{
> -	struct page *page;
> -
> -	if (mlx5e_rx_cache_get(rq, dma_info))
> -		return 0;
> -
> -	page = dev_alloc_page();
> -	if (unlikely(!page))
> -		return -ENOMEM;
> -
> -	dma_info->page = page;
> -	dma_info->addr = dma_map_page(rq->pdev, page, 0, PAGE_SIZE,
> -				      DMA_FROM_DEVICE);
> -	if (unlikely(dma_mapping_error(rq->pdev, dma_info->addr))) {
> -		put_page(page);
> -		return -ENOMEM;
> -	}
> -
> -	return 0;
> -}
> -
> -void mlx5e_page_release(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info,
> -			bool recycle)
> -{
> -	if (likely(recycle) && mlx5e_rx_cache_put(rq, dma_info))
> -		return;
> -
> -	dma_unmap_page(rq->pdev, dma_info->addr, PAGE_SIZE, DMA_FROM_DEVICE);
> -	put_page(dma_info->page);
> -}
> -
>  static int mlx5e_alloc_rx_umr_mpwqe(struct mlx5e_rq *rq,
>  				    struct mlx5e_rx_wqe *wqe,
>  				    u16 ix)
> @@ -448,7 +424,7 @@ void mlx5e_post_rx_mpwqe(struct mlx5e_rq *rq)
>  	mlx5_wq_ll_update_db_record(wq);
>  }
>  
> -int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe,	u16 ix)
> +int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe, u16 ix)
>  {
>  	int err;
>  
> @@ -650,31 +626,46 @@ static inline void mlx5e_complete_rx_cqe(struct mlx5e_rq *rq,
>  
>  void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
>  {
> +	struct mlx5e_dma_info *di;
>  	struct mlx5e_rx_wqe *wqe;
> -	struct sk_buff *skb;
>  	__be16 wqe_counter_be;
> +	struct sk_buff *skb;
>  	u16 wqe_counter;
>  	u32 cqe_bcnt;
> +	void *va;
>  
>  	wqe_counter_be = cqe->wqe_counter;
>  	wqe_counter    = be16_to_cpu(wqe_counter_be);
>  	wqe            = mlx5_wq_ll_get_wqe(&rq->wq, wqe_counter);
> -	skb            = rq->skb[wqe_counter];
> -	prefetch(skb->data);
> -	rq->skb[wqe_counter] = NULL;
> +	di             = &rq->dma_info[wqe_counter];
> +	va             = page_address(di->page);
>  
> -	dma_unmap_single(rq->pdev,
> -			 *((dma_addr_t *)skb->cb),
> -			 rq->wqe_sz,
> -			 DMA_FROM_DEVICE);
> +	dma_sync_single_range_for_cpu(rq->pdev,
> +				      di->addr,
> +				      MLX5_RX_HEADROOM,
> +				      rq->buff.wqe_sz,
> +				      DMA_FROM_DEVICE);
> +	prefetch(va + MLX5_RX_HEADROOM);
>  
>  	if (unlikely((cqe->op_own >> 4) != MLX5_CQE_RESP_SEND)) {
>  		rq->stats.wqe_err++;
> -		dev_kfree_skb(skb);
> +		mlx5e_page_release(rq, di, true);
>  		goto wq_ll_pop;
>  	}
>  
> +	skb = build_skb(va, RQ_PAGE_SIZE(rq));
> +	if (unlikely(!skb)) {
> +		rq->stats.buff_alloc_err++;
> +		mlx5e_page_release(rq, di, true);
> +		goto wq_ll_pop;
> +	}
> +
> +	/* queue up for recycling ..*/
> +	page_ref_inc(di->page);
> +	mlx5e_page_release(rq, di, true);
> +
>  	cqe_bcnt = be32_to_cpu(cqe->byte_cnt);
> +	skb_reserve(skb, MLX5_RX_HEADROOM);
>  	skb_put(skb, cqe_bcnt);
>  
>  	mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [PATCH RFC 01/11] net/mlx5e: Single flow order-0 pages for Striding RQ
  2016-09-07 19:18   ` [PATCH RFC 01/11] net/mlx5e: Single flow order-0 pages for Striding RQ Jesper Dangaard Brouer
@ 2016-09-15 14:28     ` Tariq Toukan
  0 siblings, 0 replies; 3+ messages in thread
From: Tariq Toukan @ 2016-09-15 14:28 UTC (permalink / raw)
  To: Jesper Dangaard Brouer, Saeed Mahameed
  Cc: iovisor-dev, netdev, Brenden Blanco, Alexei Starovoitov,
	Tom Herbert, Martin KaFai Lau, Daniel Borkmann, Eric Dumazet,
	Jamal Hadi Salim, linux-mm

Hi Jesper,


On 07/09/2016 10:18 PM, Jesper Dangaard Brouer wrote:
> On Wed,  7 Sep 2016 15:42:22 +0300 Saeed Mahameed <saeedm@mellanox.com> wrote:
>
>> From: Tariq Toukan <tariqt@mellanox.com>
>>
>> To improve the memory consumption scheme, we omit the flow that
>> demands and splits high-order pages in Striding RQ, and stay
>> with a single Striding RQ flow that uses order-0 pages.
> Thank you for doing this! MM-list people thank you!
Thanks. I've just submitted it to net-next.
> For others to understand what this means: this driver was doing
> split_page() on high-order pages (for Striding RQ). This was really bad
> because it fragments the page allocator and quickly depletes the
> available high-order pages.
>
> (I've left the rest of the patch intact below, in case some MM people
> are interested in looking at the changes.)
>
> There is even a funny comment in split_page() relevant to this:
>
> /* [...]
>   * Note: this is probably too low level an operation for use in drivers.
>   * Please consult with lkml before using this in your driver.
>   */
>
>
>> Moving to fragmented memory allows the use of larger MPWQEs,
>> which reduces the number of UMR posts and filler CQEs.
>>
>> Moving to a single flow allows several optimizations that improve
>> performance, especially in production servers where we would
>> anyway fallback to order-0 allocations:
>> - inline functions that were called via function pointers.
>> - improve the UMR post process.
>>
>> This patch alone is expected to give a slight performance reduction.
>> However, the new memory scheme gives the possibility to use a page-cache
>> of a fair size, that doesn't inflate the memory footprint, which will
>> dramatically fix the reduction and even give a huge gain.
>>
>> We ran pktgen single-stream benchmarks, with iptables-raw-drop:
>>
>> Single stride, 64 bytes:
>> * 4,739,057 - baseline
>> * 4,749,550 - this patch
>> no reduction
>>
>> Larger packets, no page cross, 1024 bytes:
>> * 3,982,361 - baseline
>> * 3,845,682 - this patch
>> 3.5% reduction
>>
>> Larger packets, every 3rd packet crosses a page, 1500 bytes:
>> * 3,731,189 - baseline
>> * 3,579,414 - this patch
>> 4% reduction
>>
> Well, the reduction does not really matter that much, because your
> baseline benchmarks are from a freshly booted system, where you have
> not yet fragmented and depleted the high-order pages... ;-)
Indeed. On fragmented systems we'll get a gain, even w/o the page-cache
mechanism, as no time is wasted looking for high-order pages.
>
>
>> Fixes: 461017cb006a ("net/mlx5e: Support RX multi-packet WQE (Striding RQ)")
>> Fixes: bc77b240b3c5 ("net/mlx5e: Add fragmented memory support for RX multi packet WQE")
>> Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
>> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
>> ---
>>   drivers/net/ethernet/mellanox/mlx5/core/en.h       |  54 ++--
>>   drivers/net/ethernet/mellanox/mlx5/core/en_main.c  | 136 ++++++++--
>>   drivers/net/ethernet/mellanox/mlx5/core/en_rx.c    | 292 ++++-----------------
>>   drivers/net/ethernet/mellanox/mlx5/core/en_stats.h |   4 -
>>   drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c  |   2 +-
>>   5 files changed, 184 insertions(+), 304 deletions(-)
>>
Regards,
Tariq


^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2016-09-15 14:28 UTC | newest]

Thread overview: 3+ messages
     [not found] <1473252152-11379-1-git-send-email-saeedm@mellanox.com>
     [not found] ` <1473252152-11379-2-git-send-email-saeedm@mellanox.com>
2016-09-07 19:18   ` [PATCH RFC 01/11] net/mlx5e: Single flow order-0 pages for Striding RQ Jesper Dangaard Brouer
2016-09-15 14:28     ` Tariq Toukan
     [not found] ` <1473252152-11379-5-git-send-email-saeedm@mellanox.com>
2016-09-07 19:32   ` [PATCH RFC 04/11] net/mlx5e: Build RX SKB on demand Jesper Dangaard Brouer
