
The "too small to fail" memory-allocation rule

The "too small to fail" memory-allocation rule

Posted Dec 23, 2014 23:41 UTC (Tue) by xorbe (guest, #3165)
Parent article: The "too small to fail" memory-allocation rule

An important piece of code such as XFS should reserve the minimum memory it needs at driver load, then manage those resources locally. It could optionally make dynamic reservations for non-critical purposes such as caching, where dropping the data costs only performance.
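A minimal userspace sketch of the pattern this comment suggests: a fixed pool of objects is reserved once at initialization, the critical path draws only from that reserve, and cache-style allocations remain dynamic and droppable. All names here (reserve_pool, pool_get, cache_get) are invented for illustration; this is not XFS code.

```c
#include <stdlib.h>

#define POOL_OBJECTS 16
#define OBJ_SIZE     64

struct reserve_pool {
    void *slots[POOL_OBJECTS];
    int   free_count;
};

/* One up-front allocation at "driver load"; this is the only place
 * that can fail for lack of memory. */
static int pool_init(struct reserve_pool *p)
{
    p->free_count = 0;
    for (int i = 0; i < POOL_OBJECTS; i++) {
        p->slots[i] = malloc(OBJ_SIZE);
        if (!p->slots[i])
            return -1;
        p->free_count++;
    }
    return 0;
}

/* Critical-path allocation: served from the reserve, never from the
 * system allocator. */
static void *pool_get(struct reserve_pool *p)
{
    if (p->free_count == 0)
        return NULL;    /* caller throttles; memory pressure cannot deadlock it */
    return p->slots[--p->free_count];
}

static void pool_put(struct reserve_pool *p, void *obj)
{
    p->slots[p->free_count++] = obj;
}

/* Non-critical (cache) allocation: dynamic, and a NULL here costs
 * only performance, never correctness. */
static void *cache_get(size_t n)
{
    return malloc(n);
}
```

The kernel's own mempool API (mempool_create()/mempool_alloc()) provides roughly this guarantee for real subsystems.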


The "too small to fail" memory-allocation rule

Posted Dec 24, 2014 0:54 UTC (Wed) by cwillu (guest, #67268) [Link] (1 responses)

The problem isn't that xfs can't deal with the memory allocation failing; the problem is that the allocation routines don't give it an opportunity to handle a failure before they've already gone ahead and tried to kill a process (at which point they've deadlocked).
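The behavior described here can be modeled in a few lines of userspace C (this is a toy model, not kernel code): for small requests the page allocator loops, reclaiming and eventually invoking the OOM killer, rather than returning NULL, so the caller's error path never runs. In the real kernel, passing __GFP_NORETRY is one way to opt out; the invented NORETRY flag below plays that role.

```c
#include <stdlib.h>
#include <stddef.h>

enum toy_flags { DEFAULT = 0, NORETRY = 1 };

static int oom_kills;    /* how many processes the model has "killed" */

/* attempts_until_ok simulates how long it takes reclaim to free
 * enough memory for the request to succeed. */
static void *toy_alloc(size_t n, enum toy_flags flags, int attempts_until_ok)
{
    for (int attempt = 0; ; attempt++) {
        if (attempt >= attempts_until_ok)
            return malloc(n);   /* memory finally "freed up" */
        if (flags & NORETRY)
            return NULL;        /* caller gets to run its own error path */
        oom_kills++;            /* default path: kill something and loop */
    }
}
```

With DEFAULT flags the caller never sees a failure, but processes may have been killed (and, in the real deadlock scenario, the victim may be waiting on the very filesystem doing the allocation). With NORETRY the caller sees NULL immediately and can fall back gracefully.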

The "too small to fail" memory-allocation rule

Posted Dec 24, 2014 7:35 UTC (Wed) by epa (subscriber, #39769) [Link]

I think the OP's point may have been that ideally cleanup tasks like killing a process should not require fresh allocations. Sufficient memory should be allocated in advance, so you can always close files (etc) without allocating.
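A hypothetical sketch of that idea: any memory the teardown path will need is allocated when the resource is created, so closing can never fail for lack of memory. The names (handle, close_work) are invented for illustration.

```c
#include <stdlib.h>
#include <string.h>

struct close_work { char reason[32]; };  /* whatever cleanup will need */

struct handle {
    int fd_like;                /* stand-in for the real resource */
    struct close_work *cw;      /* reserved at open time */
};

static struct handle *handle_open(void)
{
    struct handle *h = malloc(sizeof(*h));
    if (!h)
        return NULL;
    h->cw = malloc(sizeof(*h->cw));      /* reserve cleanup memory NOW */
    if (!h->cw) {
        free(h);
        return NULL;            /* fail at open time, not at close time */
    }
    h->fd_like = 42;
    return h;
}

/* Teardown allocates nothing: it only uses what open reserved, so it
 * cannot block or fail under memory pressure. */
static void handle_close(struct handle *h)
{
    strcpy(h->cw->reason, "normal close");
    free(h->cw);
    free(h);
}
```

The cost is that the reservation is held for the whole lifetime of the resource even if cleanup turns out to be trivial, which is the usual objection to preallocating error paths.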


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds