* Re: [PATCH] mm/gup: allow CMA migration to propagate errors back to caller
2019-10-21 15:17 [PATCH] mm/gup: allow CMA migration to propagate errors back to caller zhong jiang
@ 2019-10-21 15:40 ` Vlastimil Babka
2019-10-21 17:27 ` John Hubbard
2019-10-21 18:25 ` Ira Weiny
2 siblings, 0 replies; 4+ messages in thread
From: Vlastimil Babka @ 2019-10-21 15:40 UTC (permalink / raw)
To: zhong jiang, akpm; +Cc: jhubbard, linux-mm
On 10/21/19 5:17 PM, zhong jiang wrote:
> check_and_migrate_cma_pages() was recording the result of
> __get_user_pages_locked() in an unsigned "nr_pages" variable. Because
> __get_user_pages_locked() returns a signed value that can include
> negative errno values, this had the effect of hiding errors.
>
> Change the check_and_migrate_cma_pages() implementation to use a
> signed variable instead, and propagate the result back to the
> caller just as other gup internal functions do.
>
> This was discovered with the help of unsigned_lesser_than_zero.cocci.
>
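For anyone who wants to see the failure mode in isolation, here is a
minimal userspace sketch (hypothetical values, not the kernel code) of
how assigning a signed return value into an unsigned variable hides
the errno:

#include <stdio.h>

/* Stand-in for a failing __get_user_pages_locked(); -14 is -EFAULT. */
static long pin_pages(void)
{
	return -14;
}

int main(void)
{
	unsigned long nr_pages = 8;
	long ret;

	nr_pages = pin_pages();	/* -14 wraps to a huge positive value */
	if (nr_pages > 0)	/* always true, so the error is lost */
		printf("bogus success: %lu pages\n", nr_pages);

	ret = pin_pages();	/* a signed variable keeps the sign */
	if (ret < 0)
		printf("error propagated: %ld\n", ret);
	return 0;
}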
> Suggested-by: John Hubbard <jhubbard@nvidia.com>
> Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> mm/gup.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 8f236a3..c2b3e11 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> bool drain_allow = true;
> bool migrate_allow = true;
> LIST_HEAD(cma_page_list);
> + long ret = nr_pages;
>
> check_again:
> for (i = 0; i < nr_pages;) {
> @@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> * again migrating any new CMA pages which we failed to isolate
> * earlier.
> */
> - nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
> + ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
> pages, vmas, NULL,
> gup_flags);
>
> - if ((nr_pages > 0) && migrate_allow) {
> + if ((ret > 0) && migrate_allow) {
> + nr_pages = ret;
> drain_allow = true;
> goto check_again;
> }
> }
>
> - return nr_pages;
> + return ret;
> }
> #else
> static long check_and_migrate_cma_pages(struct task_struct *tsk,
>
^ permalink raw reply [flat|nested] 4+ messages in thread

* Re: [PATCH] mm/gup: allow CMA migration to propagate errors back to caller
2019-10-21 15:17 [PATCH] mm/gup: allow CMA migration to propagate errors back to caller zhong jiang
2019-10-21 15:40 ` Vlastimil Babka
@ 2019-10-21 17:27 ` John Hubbard
2019-10-21 18:25 ` Ira Weiny
2 siblings, 0 replies; 4+ messages in thread
From: John Hubbard @ 2019-10-21 17:27 UTC (permalink / raw)
To: zhong jiang, akpm; +Cc: vbabka, linux-mm
On 10/21/19 8:17 AM, zhong jiang wrote:
> check_and_migrate_cma_pages() was recording the result of
> __get_user_pages_locked() in an unsigned "nr_pages" variable. Because
> __get_user_pages_locked() returns a signed value that can include
> negative errno values, this had the effect of hiding errors.
>
> Change the check_and_migrate_cma_pages() implementation to use a
> signed variable instead, and propagate the result back to the
> caller just as other gup internal functions do.
>
> This was discovered with the help of unsigned_lesser_than_zero.cocci.
>
> Suggested-by: John Hubbard <jhubbard@nvidia.com>
> Signed-off-by: zhong jiang <zhongjiang@huawei.com>
> ---
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
thanks,
John Hubbard
NVIDIA
> mm/gup.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 8f236a3..c2b3e11 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> bool drain_allow = true;
> bool migrate_allow = true;
> LIST_HEAD(cma_page_list);
> + long ret = nr_pages;
>
> check_again:
> for (i = 0; i < nr_pages;) {
> @@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> * again migrating any new CMA pages which we failed to isolate
> * earlier.
> */
> - nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
> + ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
> pages, vmas, NULL,
> gup_flags);
>
> - if ((nr_pages > 0) && migrate_allow) {
> + if ((ret > 0) && migrate_allow) {
> + nr_pages = ret;
> drain_allow = true;
> goto check_again;
> }
> }
>
> - return nr_pages;
> + return ret;
> }
> #else
> static long check_and_migrate_cma_pages(struct task_struct *tsk,
>
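Just to spell out the flow for readers outside the gup code, here is a
compressed userspace sketch of the fixed pattern (toy stand-ins, not
the real functions; the real code loops back via "goto check_again",
which the sketch flattens to a single pass): ret starts out as
nr_pages so the no-migration path still returns the pinned count, and
a negative re-pin result is handed back unmodified:

#include <stdio.h>

/* Toy pin function: returns a count; change the body to return a
 * negative value to model a failure. */
static long fake_gup(unsigned long nr_pages)
{
	return (long)nr_pages;
}

static long pin_and_maybe_migrate(unsigned long nr_pages, int migrated)
{
	long ret = nr_pages;	/* no-migration path: return the count */

	if (migrated) {
		ret = fake_gup(nr_pages);	/* re-pin after migration */
		if (ret > 0)
			nr_pages = ret;	/* keep the unsigned loop bound in sync */
	}
	return ret;	/* count on success, negative errno on failure */
}

int main(void)
{
	printf("pinned: %ld\n", pin_and_maybe_migrate(8, 1));
	return 0;
}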
^ permalink raw reply [flat|nested] 4+ messages in thread

* Re: [PATCH] mm/gup: allow CMA migration to propagate errors back to caller
2019-10-21 15:17 [PATCH] mm/gup: allow CMA migration to propagate errors back to caller zhong jiang
2019-10-21 15:40 ` Vlastimil Babka
2019-10-21 17:27 ` John Hubbard
@ 2019-10-21 18:25 ` Ira Weiny
2 siblings, 0 replies; 4+ messages in thread
From: Ira Weiny @ 2019-10-21 18:25 UTC (permalink / raw)
To: zhong jiang; +Cc: akpm, jhubbard, vbabka, linux-mm
On Mon, Oct 21, 2019 at 11:17:10PM +0800, zhong jiang wrote:
> check_and_migrate_cma_pages() was recording the result of
> __get_user_pages_locked() in an unsigned "nr_pages" variable. Because
> __get_user_pages_locked() returns a signed value that can include
> negative errno values, this had the effect of hiding errors.
>
> Change the check_and_migrate_cma_pages() implementation to use a
> signed variable instead, and propagate the result back to the
> caller just as other gup internal functions do.
>
> This was discovered with the help of unsigned_lesser_than_zero.cocci.
>
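As an aside, the direct form of this bug is also something the
compiler can flag; an illustrative snippet (not taken from the kernel
tree, compile with "gcc -Wextra -c"):

static long pin_pages(void)
{
	return -14;	/* -EFAULT */
}

int check(void)
{
	unsigned long n = pin_pages();

	if (n < 0)	/* -Wtype-limits: comparison is always false */
		return -1;
	return 0;
}

The cocci script flags the same "unsigned compared against zero"
pattern across the whole tree, which helps when the code is not built
with -Wextra or the pattern is less direct than the above.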
> Suggested-by: John Hubbard <jhubbard@nvidia.com>
> Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> ---
> mm/gup.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 8f236a3..c2b3e11 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> bool drain_allow = true;
> bool migrate_allow = true;
> LIST_HEAD(cma_page_list);
> + long ret = nr_pages;
>
> check_again:
> for (i = 0; i < nr_pages;) {
> @@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> * again migrating any new CMA pages which we failed to isolate
> * earlier.
> */
> - nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
> + ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
> pages, vmas, NULL,
> gup_flags);
>
> - if ((nr_pages > 0) && migrate_allow) {
> + if ((ret > 0) && migrate_allow) {
> + nr_pages = ret;
> drain_allow = true;
> goto check_again;
> }
> }
>
> - return nr_pages;
> + return ret;
> }
> #else
> static long check_and_migrate_cma_pages(struct task_struct *tsk,
> --
> 1.7.12.4
>
>
^ permalink raw reply [flat|nested] 4+ messages in thread