On 02/19/18 11:19, Mel Gorman wrote:
>
>> Index: linux/mm/page_alloc.c
>> ===================================================================
>> --- linux.orig/mm/page_alloc.c
>> +++ linux/mm/page_alloc.c
>> @@ -1844,7 +1844,12 @@ struct page *__rmqueue_smallest(struct z
>>  		area = &(zone->free_area[current_order]);
>>  		page = list_first_entry_or_null(&area->free_list[migratetype],
>>  						struct page, lru);
>> -		if (!page)
>> +		/*
>> +		 * Continue if no page is found or if our freelist contains
>> +		 * less than the minimum pages of that order. In that case
>> +		 * we better look for a different order.
>> +		 */
>> +		if (!page || area->nr_free < area->min)
>>  			continue;
>>  		list_del(&page->lru);
>>  		rmv_page_order(page);
>
> This is surprising to say the least. Assuming reservations are at order-3,
> this would refuse to split order-3 even if there was sufficient reserved
> pages at higher orders for a reserve.

Hi Mel,

I agree with you that the above code does not really do what it should.
At least, the condition needs to be changed to:

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 76c9688b6a0a..193dfd85a6b1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1837,7 +1837,15 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 		area = &(zone->free_area[current_order]);
 		page = list_first_entry_or_null(&area->free_list[migratetype],
 						struct page, lru);
-		if (!page)
+		/*
+		 * Continue if no page is found or if we are about to
+		 * split a truly higher order than requested.
+		 * There is no limit for just _using_ exactly the right
+		 * order. The limit is only for _splitting_ some
+		 * higher order.
+		 */
+		if (!page ||
+		    (area->nr_free < area->min && current_order > order))
 			continue;
 		list_del(&page->lru);
 		rmv_page_order(page);

The "&& current_order > order" part is _crucial_: the reservation must only
prevent _splitting_ a higher order than the one requested, never refuse a
page of exactly the right order. If it is left out, the check is even
counter-productive. I know this from the development of my original patch
some years ago.

Please have a look at the attached patchset for kernel 3.16, which has been
in _production_ at 1&1 Internet SE on about 20,000 servers for several years
now, starting from kernel 3.2.x up to 3.16.x (or maybe the very first version
was for 2.6.32, I don't remember exactly). It has collected several million
operating hours in total, and it is known to work miracles for some of our
workloads. Porting it to later kernels should be relatively easy.

Also notice that the switch labels in patch #2 may need some minor tweaking,
e.g. also including ZONE_DMA32 or similar, and possibly some
architecture-specific tweaking as well. All of the tweaking depends on the
actual workload. I am using the patchset only on datacenter servers
(webhosting) and on x86_64.

Please notice that the user interface of my patchset is extremely simple and
can be easily understood by junior sysadmins: after running your box for
several days or weeks or even months (or possibly, right after you just got
an OOM), just do

  # cat /proc/sys/vm/perorder_statistics > /etc/defaults/my_perorder_reserve

Then add a trivial startup script, e.g. for systemd or sysv init (see the
sketch below), which just does the following early during the next reboot:

  # cat /etc/defaults/my_perorder_reserve > /proc/sys/vm/perorder_reserve

That's it. No need for a deep understanding of the theory behind the memory
fragmentation problem, and no need to add anything to the boot command line.
Fragmentation will typically occur only after some days or weeks or months of
operation, at least in all of the practical cases I have personally seen at
1&1 datacenters and their workloads.
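
For illustration, a systemd unit for the restore step could look roughly
like the following. This is only a hedged sketch: the unit name and the
snapshot path /etc/defaults/my_perorder_reserve are just the example names
from above, not something shipped by the patchset.

  # /etc/systemd/system/perorder-reserve.service (hypothetical name)
  [Unit]
  Description=Restore per-order page reserves from the last snapshot
  # Run early in boot, before long-running services can fragment memory.
  DefaultDependencies=no
  After=systemd-sysctl.service
  Before=sysinit.target
  # Do nothing if no snapshot has been taken yet.
  ConditionPathExists=/etc/defaults/my_perorder_reserve

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/bin/sh -c 'cat /etc/defaults/my_perorder_reserve > /proc/sys/vm/perorder_reserve'

  [Install]
  WantedBy=sysinit.target

Enable it once with "systemctl enable perorder-reserve.service"; a one-line
sysv init script doing the same cat would work equally well.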
Please notice that fragmentation can be a very serious problem for operations
if you are hurt by it. It can seriously harm your business. And it is
_extremely_ specific to the actual workload and to the hardware / chipset /
etc. This is addressed by the above method of determining the right values
from _actual_ operations (not from speculation) and then memoizing them.

The attached patchset tries to be very simple, but in my experience it is a
very effective practical solution.

If requested, I can post the mathematical theory behind the patch, or I could
give a presentation at one of the next conferences if I were invited (or,
better, give a practical explanation instead). But probably nobody on these
lists wants to deal with any theories. Just _play_ with the patchset, and
then you will notice.

Cheers and greetings,

Yours sincerely, old-school hacker Thomas

P.S. I cannot attend these lists full-time due to my workload at 1&1, which
is unfortunately not designed for upstream hacking, so please be patient with
me if an answer takes a few days.