

The "too small to fail" memory-allocation rule

The "too small to fail" memory-allocation rule

Posted Dec 25, 2014 22:33 UTC (Thu) by yoe (guest, #25743)
In reply to: The "too small to fail" memory-allocation rule by quotemstr
Parent article: The "too small to fail" memory-allocation rule

ZFS doesn't necessarily need its own cache, I suppose, but the algorithms involved can be more efficient if they have knowledge of the actual storage layout (which *does* require that they be part of the ZFS code, if taken to the extreme). E.g., ZFS can keep a second-level cache on SSDs; if the first-level (in-memory) cache needs to drop something, it may prefer to drop an entry it knows is also stored in the SSD cache over one that isn't. The global page cache doesn't have the intimate knowledge of the storage backend that is required for that sort of thing.
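To make that eviction preference concrete, here is a minimal sketch in C. The cache_entry structure, the on_l2_ssd flag and pick_victim() are hypothetical names chosen for illustration, not the actual ZFS ARC/L2ARC code; the point is only that the victim choice can consult second-level-cache knowledge the generic page cache doesn't have.

/*
 * Hypothetical sketch of "prefer to evict what is already on the SSD
 * cache"; not the real ZFS implementation.
 */
#include <stdbool.h>
#include <stddef.h>

struct cache_entry {
	struct cache_entry *next;   /* LRU list, oldest entry first */
	bool on_l2_ssd;             /* copy already present in the SSD cache? */
	/* ... data, size, bookkeeping ... */
};

/*
 * Pick a victim to drop from the in-memory cache.  Prefer the oldest
 * entry that also lives on the second-level SSD cache, since dropping
 * it loses nothing that can't be re-read cheaply; otherwise fall back
 * to the plain LRU victim.
 */
static struct cache_entry *pick_victim(struct cache_entry *lru_head)
{
	struct cache_entry *e;

	for (e = lru_head; e != NULL; e = e->next) {
		if (e->on_l2_ssd)
			return e;
	}
	return lru_head;    /* nothing on the SSD: evict the oldest entry */
}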

I suppose that's a step backwards if what you want is "mediocre cache performance, but similar performance for *all* file systems". That's not what ZFS is about, though; ZFS aims to provide excellent performance at all costs. That does mean it's not the best choice for every workload, but it beats the pants off most other solutions on the workloads it was designed for.



