

What became of getrandom() in the vDSO

By Jonathan Corbet
July 25, 2024
In the previous episode of the vgetrandom() story, Jason Donenfeld had put together a version of the getrandom() system call that ran in user space, significantly improving performance for applications that need a lot of random data while retaining all of the guarantees provided by the system call. At that time, it seemed that a consensus had built around the implementation and that it was headed toward the mainline in that form. A few milliseconds after that article was posted, though, a Linus-Torvalds-shaped obstacle appeared in its path. That obstacle has been overcome and this work has now been merged for the 6.11 kernel, but its form has changed somewhat.

Torvalds initially rejected the idea of a vDSO implementation entirely, saying that there was no clear use case for it. At most, he said, the kernel should export a generation counter to inform user-space random-number generators that they should reseed themselves; anything beyond that, he said, was more than the kernel needed to provide. After a fair amount of back-and-forth with Donenfeld, who made the point that he did not want to expose the internal functioning of the kernel's random-number generator to user space, Torvalds reluctantly agreed to take another look and reconsider.

When he came back a couple of hours later, his biggest complaint centered around the new vgetrandom_alloc() system call, which was added to allocate the special memory needed to hold the per-thread state used by vgetrandom(). The ability to allocate memory that could be dropped by the kernel if needed had been requested before, he said, and could be useful in other settings. But it had been made available in a specialized form that was only suitable for vgetrandom(); that would lead developers to misuse that system call to allocate memory for other purposes. "And that nightmare has to be avoided".

His suggestion was that vgetrandom_alloc() should go away, and that the ability to allocate droppable memory should, instead, become just another mmap() flag. Donenfeld made that change, with the result that mmap() in 6.11 will support the MAP_DROPPABLE flag; that flag will be mandatory for memory allocated for use with vgetrandom(). Developers will be able to allocate droppable memory for other purposes, if desired, without having to use a special-purpose system call.

Allocating memory was only one of the reasons to call vgetrandom_alloc(), though; that call also informed the caller about how much memory was needed to hold the per-thread state. That information is now obtained with a special call to vgetrandom(), which retains the same prototype from previous attempts:

    ssize_t vgetrandom(void *buffer, size_t len, unsigned int flags,
                       void *opaque_state, size_t opaque_len);

If this function is called with the buffer, len, and flags parameters all set to zero and opaque_len set to ~0UL then, rather than generating random data, vgetrandom() will fill the memory pointed to by opaque_state with this structure:

    struct vgetrandom_opaque_params {
        __u32 size_of_opaque_state;
        __u32 mmap_prot;
        __u32 mmap_flags;
        __u32 reserved[13];
    };

The caller can then allocate the needed memory to store the per-thread state for as many threads as might be needed, passing the provided mmap_prot and mmap_flags values directly to mmap(). It is the caller's responsibility to ensure that the allocated memory does not cross a page boundary.

Once this memory has been allocated, user space can use vgetrandom() as described in the previous article — essentially as a drop-in replacement for the getrandom() system call. This should happen with no action needed by most developers, since it is expected that the C libraries will handle the allocation of the state memory and calls to the vDSO implementation. Application developers should get the newer, faster getrandom() for free with a library upgrade.

Index entries for this article
Kernel: Random numbers
Kernel: Security/Random number generation



Buried the lede

Posted Jul 25, 2024 21:05 UTC (Thu) by eatnumber1 (subscriber, #136670) [Link] (41 responses)

Buried the lede here. Droppable memory would be extremely useful in some applications!

Buried the lede

Posted Jul 25, 2024 21:27 UTC (Thu) by Paf (subscriber, #91811) [Link] (40 responses)

I would love to know more about what "droppable" means in practice!

Droppable memory

Posted Jul 25, 2024 21:33 UTC (Thu) by corbet (editor, #1) [Link] (38 responses)

The previous article describes the semantics of droppable memory. In short, it is anonymous (data) memory that can be dropped by the kernel when memory gets tight. So, rather than writing its contents to swap, the kernel will just forget about it. That will cause the next access to the memory to map in the zero page. This memory can be used as a sort of cache as long as the application recognizes when it has been reset.

Droppable memory

Posted Jul 25, 2024 22:58 UTC (Thu) by Paf (subscriber, #91811) [Link]

Ah, thank you for the summary! Interesting…

Droppable memory

Posted Jul 26, 2024 0:38 UTC (Fri) by rywang014 (subscriber, #167182) [Link] (23 responses)

How does the application know whether the map has been dropped, or whether the data it's accessing is indeed zero?

Droppable memory

Posted Jul 26, 2024 0:55 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (20 responses)

Set up a variable at the start of the page and initialize it to 1. At the end of the reading, check that it's still 1.

Droppable memory

Posted Jul 26, 2024 3:21 UTC (Fri) by abatters (✭ supporter ✭, #6932) [Link] (10 responses)

How long till the compiler writers declare any use of MAP_DROPPABLE to be UB?

Droppable memory

Posted Jul 26, 2024 4:49 UTC (Fri) by NYKevin (subscriber, #129325) [Link] (9 responses)

It already will be, unless it's behind a volatile pointer. But you should already be using volatile for any mmap pointer, unless you passed the appropriate flags to make mmap behave like malloc (at which point, you should probably just let your malloc implementation handle that detail for you, since nearly all nontrivial mallocs will use mmap in some capacity).

OTOH, if it is behind volatile, it's very unlikely that compilers will fiddle with it, because the whole point of volatile is to declare a "safe space" where compilers are not allowed to assume that memory behaves sanely.

Droppable memory

Posted Jul 26, 2024 12:30 UTC (Fri) by pizza (subscriber, #46) [Link] (6 responses)

> the whole point of volatile is to declare a "safe space" where compilers are not allowed to assume that memory behaves sanely.

..."sanely" in the "spherical cow" sense.

Droppable memory

Posted Jul 26, 2024 17:02 UTC (Fri) by NYKevin (subscriber, #129325) [Link] (5 responses)

I wouldn't say that. The standard is quite thorough. It basically says that memory accesses to volatile objects should be treated as I/O. That means no dead store elimination, redundant load elimination, reordering, or anything else that would change the observed sequence of memory loads and stores (for the same reason that the compiler is not allowed to optimize out multiple reads or writes at the same offset in some file). Just about the only guarantee it doesn't provide is atomicity (which is of course important, if you're writing multithreaded code, but it would be unreasonable overhead for something like single-threaded mmap, if we had to constantly emit full memory barriers everywhere).

More volatile.

Posted Jul 26, 2024 19:57 UTC (Fri) by rweikusat2 (subscriber, #117920) [Link] (2 responses)

Assuming the following object definitions:

    volatile int a, b;
    int c;

and this statement:

    c = a + b;

it's perfectly acceptable for the compiler to generate the loads accessing the values of a and b in any order. There's no sequence point inside this expression and hence there's no ordering requirement for access to a versus access to b. The only requirement is that all side effects happen (access a, access b, store to c) and are completed at the end of the statement (which is a sequence point).

If a and b hadn't been declared as volatile, access to either a or b or both, ie, loads from the memory holding these objects, could be omitted if the corresponding values can be determined in another way, eg, by reusing something already loaded into a register at an earlier time.

More volatile.

Posted Jul 27, 2024 1:50 UTC (Sat) by NYKevin (subscriber, #129325) [Link] (1 responses)

That's not reordering. That's "you didn't ask for any ordering in the first place." C is not e.g. C# (which does specify that expressions are evaluated from left to right).

More volatile.

Posted Jul 30, 2024 14:55 UTC (Tue) by rweikusat2 (subscriber, #117920) [Link]

Sorry to be nitpicking, but the compiler can only reorder evaluation of subexpressions if the order of evaluation of subexpressions isn't defined. Otherwise, it must not do so. C doesn't define a subexpression evaluation order for most operators, and declaring objects as volatile doesn't change that. It just demands that the number of object accesses at runtime be the same as the number of object accesses a naive translation of the source code would require.

To use another example, assuming

    volatile unsigned a;
    unsigned b;

    b = a + a;

then the generated code must actually load a twice instead of employing the arithmetically equivalent and probably faster alternative of loading a once, shifting the value left by one, and storing the result in b.

Droppable memory

Posted Jul 26, 2024 23:00 UTC (Fri) by roc (subscriber, #30627) [Link] (1 responses)

FWIW atomicity and memory barriers are orthogonal.

Making volatile memory accesses relaxed-atomic would actually make some amount of sense.

Droppable memory

Posted Jul 28, 2024 15:36 UTC (Sun) by foom (subscriber, #14868) [Link]

It might make some sense, but one difference between relaxed atomic and volatile is that if a volatile access is too large or not appropriately aligned (such that it cannot be executed atomically on the hardware), it will be split up into multiple accesses, and executed non-atomically.

You might say volatile accesses which cannot be compiled to a single memory operation ought to be forbidden. That is a reasonable position, but it would break lots of software which is doing it.

On the other hand, for a relaxed atomic in those circumstances, the compiler will add a lock around the memory accesses to ensure atomicity is retained despite the multiple operations. But note that a lock will only make it atomic w.r.t. other atomic accesses in other threads of the same process -- NOT cross process, NOT to memory mapped devices, and NOT safe w.r.t. asynchronous signal handlers.

So, adding locks to currently-non-atomic volatile accesses would break anyone using it in a signal handler, and be at best useless when using volatile for its intended purpose of communicating with hardware. So that doesn't seem like a good idea either.

Volatile

Posted Jul 26, 2024 17:09 UTC (Fri) by rweikusat2 (subscriber, #117920) [Link] (1 responses)

The only defined meaning of volatile is that accessing an object labelled as volatile is to be regarded as producing a required side effect as opposed to only writing to an object.

Volatile

Posted Jul 26, 2024 19:59 UTC (Fri) by NYKevin (subscriber, #129325) [Link]

But that's enough to block nearly all optimizations involving memory operations. The memory operations are strongly bundled together with the required side effect, so the compiler is not allowed to disentangle them and reason separately about what the memory is doing. You could, for example, mark the memory as non-accessible and manually emulate reads and writes in the fault handler (producing and consuming the values in whatever manner you like, regardless of whether it fits the usual semantics of memory, or even makes any logical sense whatsoever), and volatile provides the necessary guarantees for such a setup.

Yes, that's a stupid and non-performant thing to do, but the point is that the compiler is not allowed to meddle with it.

Droppable memory

Posted Jul 26, 2024 7:48 UTC (Fri) by epa (subscriber, #39769) [Link] (8 responses)

That’s kind of annoying though. You can store only 4095 usable bytes in each 4096 byte page. It would be nicer to have a way to check (without race conditions) that it’s still mapped. Or have access to a disappeared page generate some kind of signal, less severe than the normal segmentation fault, which the process can catch.

Droppable memory

Posted Jul 26, 2024 9:18 UTC (Fri) by josh (subscriber, #17465) [Link] (3 responses)

Losing one byte (or bit) out of every page doesn't seem like the biggest problem. I think a bigger problem is that you can't *directly* store pointers in them to memory you don't have pointers to elsewhere. So, for instance, it'd be an interesting challenge to design a hash bucket around this that doesn't leak memory; you could use this for pointed-to values quite easily, but it'd be harder to use this for the pointers.

Droppable memory

Posted Jul 27, 2024 9:51 UTC (Sat) by epa (subscriber, #39769) [Link] (2 responses)

I think losing one bit every 4k limits the usefulness a lot. Suppose you have a 100 kilobyte temporary object you want to cache (but don’t mind losing it if memory is tight). You have to carefully split it into chunks of just under 4096 bytes and reconstruct it, if you want to use the first bit of each page as a flag.

You could instead check for constant zero pages, but then you can’t cache things that contain a large expanse of zeroes.

And in any case you’d need to copy the memory first before you check. You can’t check first and then use it, since it might be freed meanwhile.

Signals are racy, but might be the less bad option.

Droppable memory

Posted Jul 28, 2024 8:47 UTC (Sun) by smurf (subscriber, #17840) [Link] (1 responses)

Most objects tend to not have all-zero records. It's easy enough to e.g. set up an enum to start with 1 instead of zero, thus you lose at most 1/2^8'th bit.

I do wonder whether this droppable page thing shouldn't be a droppable folio instead, i.e. all-or-nothing.

Droppable memory

Posted Jul 28, 2024 19:29 UTC (Sun) by epa (subscriber, #39769) [Link]

I meant when the object does not fit in a single page. It’s a pain to make sure that every 4096th byte is nonzero, or whatever. Your suggestion of dropping a whole folio would fix this.

Droppable memory

Posted Jul 26, 2024 20:16 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

Signals are always going to be racy, so it's not a solution. But dropping all the mmap() allocation at once seems to be a good compromise that requires only a handful of lines of code.

Droppable memory

Posted Jul 26, 2024 23:05 UTC (Fri) by roc (subscriber, #30627) [Link] (2 responses)

What would be really nice is to make the first word of the droppable region be a futex and the kernel does a try-lock on the futex before dropping the memory. If the try-lock fails the memory is not dropped, otherwise the kernel drops the memory and finally drops the first page, releasing the futex and waking any waiters.

That gives user-space an efficient way to avoid having to deal with "any value in memory can become zero at any time".

Droppable memory

Posted Jul 29, 2024 2:46 UTC (Mon) by NYKevin (subscriber, #129325) [Link] (1 responses)

You can do that with MADV_FREE right now. It's moderately more work, but it is definitely possible to do.

Droppable memory

Posted Jul 29, 2024 3:15 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

MADV_FREE is pretty slow (because it has to grab a very hot mmap semaphore), if you have to do it often.

Droppable memory

Posted Jul 26, 2024 14:26 UTC (Fri) by IAmLiterallyABee (subscriber, #144892) [Link] (1 responses)

Android used to have a kernel feature called ASHMEM. Similar to memfd, you get a file descriptor referring to anonymous memory.
Except ASHMEM had the concept of "pinning" and unpinning: you could unpin memory, and the kernel could drop it whenever there was memory pressure. When you want to use the memory, you pin it via ioctl, and the kernel returns a flag saying whether the page is still intact. While the page is pinned, you can use it without fear of it disappearing beneath your feet (as I believe it can with VM_DROPPABLE).

Droppable memory

Posted Jul 26, 2024 15:59 UTC (Fri) by KJ7RRV (subscriber, #153595) [Link]

It is, or at least used to be, possible to use ASHMEM on regular Linux too; Waydroid (https://lwn.net/Articles/901459/) required it in the past.

Droppable memory

Posted Jul 26, 2024 1:01 UTC (Fri) by ms-tg (subscriber, #89231) [Link] (5 responses)

Sorry if this is obvious, but just to make sure I understand: the user-space caller is obligated to detect that the droppable memory has been dropped and the zero page mapped in.

In practice, what does this look like? Every allocated structure in droppable memory must have a test bit, which is 1 unless the kernel has dropped the memory? And at the beginning of every block of code which access the structure, one must check that bit, and re-initialize if the bit is zero?

Droppable memory

Posted Jul 26, 2024 1:02 UTC (Fri) by ms-tg (subscriber, #89231) [Link] (3 responses)

Never mind, I see you responded above that one must check for the 1 at both the beginning and end of an interaction with the memory, or else presumably retry after re-initializing.

Droppable memory

Posted Jul 26, 2024 4:53 UTC (Fri) by NYKevin (subscriber, #129325) [Link] (2 responses)

Technically, checking at the beginning is both unnecessary and unhelpful, because you still race with the drop. Your thread can get preempted at any time, and the memory paged out from under you, even after you have checked that the memory is valid (but before you have finished consuming it). You just have to write robust code and check for failure at the end.

Droppable memory

Posted Jul 26, 2024 5:35 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

The grandparent probably meant that checking at the beginning is a nice optimization, so you won't waste time trying to read a zero page.

Droppable memory

Posted Jul 26, 2024 8:25 UTC (Fri) by epa (subscriber, #39769) [Link]

Or you temporarily mark it as un-droppable before you use it and put it back when you're done. The kernel could provide a kind of handle which indicates you will reference a certain range of memory. The pages in that range can't be dropped while the handle is held, even if they are marked droppable. Later you release the handle. This wouldn't help much for random number generation (where the aim is to have close to zero system calls) but it would be a race-free way for applications to use "cache" memory for other purposes.

Droppable memory

Posted Jul 26, 2024 13:00 UTC (Fri) by Wol (subscriber, #4433) [Link]

> In practice, what does this look like? Every allocated structure in droppable memory must have a test bit, which is 1 unless the kernel has dropped the memory?

THINK OUTSIDE THE BOX!

Didn't it say, if the memory gets dropped, it gets replaced by an all-zeros page on the next access?

So while you're parsing the memory, if you get an unexpected zero the memory has been dropped.

If there's an area of memory that cannot be all zeroes, just check when you've finished that it's still non-zero. Provided you know that, somewhere in that region, there's an area that must hold valid data (a version structure, for example), just check that that data is still there.

Cheers,
Wol

Droppable memory

Posted Jul 26, 2024 10:53 UTC (Fri) by fweimer (guest, #111581) [Link] (6 responses)

I still don't quite see how this is different from MADV_FREE. Is the difference that it's sticky and is preserved after writes? MADV_FREE memory reverts to regular memory upon write.

Droppable memory

Posted Jul 26, 2024 11:13 UTC (Fri) by david.hildenbrand (subscriber, #108299) [Link] (5 responses)

Yes, it's sticky. After re-dirtying the page (IOW writing to it) you won't have to call MADV_FREE again.

Droppable memory

Posted Jul 26, 2024 11:31 UTC (Fri) by fweimer (guest, #111581) [Link] (4 responses)

And VM_DROPPABLE is not? If it's sticky as well, I really don't see the difference …

Droppable memory

Posted Jul 26, 2024 13:11 UTC (Fri) by barryascott (subscriber, #80640) [Link] (3 responses)

> And VM_DROPPABLE is not? If it's sticky as well, I really don't see the difference …

The advantage is that the kernel can reclaim the memory when it's under memory pressure. The page does not need to be written to backing store (swap, etc.), so the reclaim is instant.

Any cached data will just be recached when the app detects the cache is zeroed.

For the vgetrandom use case the kernel drops the page anytime the slow path must be used to get random data.

Droppable memory vs MADV_FREE

Posted Jul 26, 2024 13:18 UTC (Fri) by fweimer (guest, #111581) [Link] (2 responses)

I still don't get how this is different from MADV_FREE. As far as I know, it has all the same properties.

Droppable memory vs MADV_FREE

Posted Jul 26, 2024 13:32 UTC (Fri) by farnz (subscriber, #17727) [Link] (1 responses)

Droppable memory can be dropped at any time. MADV_FREEd memory can be dropped up until the next time the page is written to; writing to a page cancels out the effect of MADV_FREE.

Semantically, droppable memory is the same as calling MADV_FREE on a page after every write to that page, but without the overhead of a syscall after each write.

Droppable memory vs MADV_FREE

Posted Jul 26, 2024 13:58 UTC (Fri) by fweimer (guest, #111581) [Link]

Got it now, thanks. My initial question confused me.

Buried the lede

Posted Aug 15, 2024 7:44 UTC (Thu) by ksandstr (guest, #60862) [Link]

It means userspace memory that the kernel may reclaim by losing its data, like clean page-cache pages. This could be useful in e.g. userspace caches of remote cluster resources, which might more efficiently ride the kernel's replacement policy, given the latter's privileged access to page-table accessed/dirty bits and its ability to do better system-wide replacement already. Unilateral zeroing can be detected by storing the position of the last nonzero byte and checking that it is nonzero after copying out.

It'd also be an architectural mistake to use kernel replacement in this way for anything that's more expensive to recompute than, or shaped differently from, block data. Preemption would further limit such a feature's usage to VMs, databases, and other system software that caches raw storage under a non-POSIX abstraction.

Sentinel location

Posted Jul 25, 2024 22:48 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

It'd be nice if it were possible to associate a location with a set of pages, so that location is zeroed out if any of the pages is dropped.

This will make it easy to detect if any of the cached data is invalidated. It's possible with the current semantics, but you have to put a sentinel into each page. Not optimal.

Sentinel location

Posted Jul 26, 2024 0:08 UTC (Fri) by josh (subscriber, #17465) [Link] (2 responses)

One way to solve that would be a flag for "if you want to drop any of these pages, you must drop the first page".

That said, for most applications I want to use this for, I don't want it to be all-or-nothing, and it doesn't seem like too much of an imposition to have a flag in each page.

Sentinel location

Posted Jul 26, 2024 0:46 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

That's actually a pretty good solution. And apps can create multiple mappings, so it's not such a big limitation. But it'll allow caching of objects that are larger than the page size.

Sentinel location

Posted Jul 26, 2024 3:31 UTC (Fri) by pj (subscriber, #4506) [Link]

Isn't "not all-or-nothing" the same as multiple droppable allocations? I guess the latter doesn't guarantee contiguity, but maybe that's the price you pay to get granular droppability.

use case?

Posted Jul 26, 2024 3:03 UTC (Fri) by lwnuser573 (guest, #134940) [Link] (4 responses)

Did we discover the use case(s) for the really high speed random?

use case?

Posted Jul 26, 2024 9:21 UTC (Fri) by LondonPrentice (guest, #172410) [Link]

The use case is to prevent C libraries (specifically the GNU one) from implementing yet another random number generator in user space just because someone might complain that getrandom() is too slow.

use case?

Posted Jul 26, 2024 10:14 UTC (Fri) by fishface60 (subscriber, #88700) [Link]

From my reading of the discussion I don't remember exactly which use-cases require frequently calculating random numbers, but the existence of a range of high performance userland random number generators implies the use-case exists.

The need for kernel intervention is that the kernel is best positioned to know when it has been hibernated or is a resumed VM snapshot, and communicating this to userland is tricky (and I think attempts to get an interface for this merged met resistance).

I think one of the reasons why the original had its own special mmap interface was so that the page could be dropped for any future circumstances where you'd want to reset random number generation too, which is distinct from just being safe to drop, but Linus objected because people would misuse it for droppable mmaps in the absence of that feature being merged, so it was reworked to depend on that feature instead.

This potentially leaves it with an inconvenient API that conflates "it's safe to drop this" with "I want you to drop this when the random number generator must be reseeded" but I suppose it's working on the assumption that every time random numbers must be reseeded is also a good opportunity to drop nonessential memory.

As for why it has to be getrandom(): I think the argument is that, while userspace RNGs could use the droppable mmap to identify when to reseed, it's apparently asking a lot for every RNG implementation to get that right; the kernel can evolve its notion of when reseeding is needed independently; and there's apparently interest in having something able to use the same algorithm as the kernel.

use case?

Posted Jul 27, 2024 0:49 UTC (Sat) by mina86 (guest, #68442) [Link]

The use case is any application that needs a cryptographic random number generator. But can't they seed a CPRNG once and use that? Normally they could, except in the case of virtual-machine cloning and migration, where two copies of the process could end up with the same random state. vDSO getrandom() guarantees that the application reseeds the CPRNG whenever this happens.

use case?

Posted Jul 27, 2024 2:42 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

For example, we're using hash maps with random seeds to make sure they can't be used for a DDoS. We need almost a million seeds per second for some cases, and the syscall overhead is significant. So we have a userspace RNG, seeded with the kernel's entropy source. With this patch, we'll be able to remove all of it and just use the regular libc for random numbers.

Een bussie

Posted Jul 26, 2024 9:13 UTC (Fri) by LondonPrentice (guest, #172410) [Link] (6 responses)

> A few milliseconds after that article was posted, though, a Linus-Torvalds-shaped obstacle appeared in its path. That obstacle has been overcome and this work has now been merged for the 6.11 kernel, but its form has changed somewhat.

Where is that bus everybody is talking about?

Because an absurd overloading of the 'vgetrandom' system call with some special semantics doesn't strike me as an improvement over a dedicated system call for the same purpose. It's the reverse.

Politeness

Posted Jul 26, 2024 13:29 UTC (Fri) by daroc (editor, #160859) [Link] (1 responses)

I don't think joking about the death of an open-source maintainer (the bus from a project's "bus factor") is something we want to encourage in the LWN comments. Discussing the potential problems with a project's leadership structure is on topic, joking about their deaths is not.

Politeness

Posted Jul 26, 2024 21:02 UTC (Fri) by LondonPrentice (guest, #172410) [Link]

> Discussing the potential problems with a project's leadership structure is on topic, joking about their deaths is not.

I just think Torvalds should trust his maintainers more, because the whole situation felt like a less competent superior lecturing a subordinate who has much more expertise than him. And Donenfeld is not some random person who got into Linux development just because of his enthusiasm; he's apparently the sole person who realised how half-baked an idea 'arc4random' was and prevented its inclusion (in its original form, unfortunately not completely) in glibc at the last minute!

Comparing system calls

Posted Jul 26, 2024 13:37 UTC (Fri) by corbet (editor, #1) [Link] (2 responses)

vgetrandom() has been overloaded with a special mode that tells callers how to use vgetrandom(). It's mildly strange but is quite focused.

vgetrandom_alloc() was a special-purpose memory-allocation system call that had a high potential to be used in unintended ways. It is not the same thing at all. Moving the memory allocation back into mmap(), which is how processes allocate memory in general, makes a lot more sense.

Comparing system calls

Posted Jul 26, 2024 20:48 UTC (Fri) by LondonPrentice (guest, #172410) [Link] (1 responses)

> It is not the same thing at all.

Oh, right, I've just re-read the article more closely, and it is way worse. Instead of allocating the memory itself, the call just... returns the parameters that you have to call mmap() with. I guess somebody read 'teach a man to fish' and drew the wrong conclusions from that.

Comparing system calls

Posted Jul 29, 2024 10:24 UTC (Mon) by ianmcc (guest, #88379) [Link]

Not sure what you mean here. When you have a function that does some computation and writes to an output buffer, it is a very common technique to have a special mode that means "don't do the actual computation, just tell me how big the output buffer needs to be". There is no point bloating the API with a separate function to query the buffer size, and because almost all of the parameters are going to be the same between the two calls it is fairly robust.

Een bussie

Posted Jul 26, 2024 17:19 UTC (Fri) by brchrisman (subscriber, #71769) [Link]

I think the comment played on the joke in the article and meant "It's an inanimate object in the shape of a person.... we need a bus to remove it." I don't think the comment intended to suggest harm to a person.

glibc patch posted

Posted Jul 30, 2024 14:55 UTC (Tue) by zx2c4 (subscriber, #82519) [Link]


Copyright © 2024, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds