
Hacker News | aliceryhl's comments

Thank you. I was wondering about that.

Hmm, I read some of the decision, and now I'm not sure what to make of all of it.

When I came to the opinion from Jackson, J., I found it extremely compelling. He says this:

... But some of TWEA’s sections delegating this authority had lapsed, and “there [was] doubt as to the effectiveness of other sections.” Accordingly, Congress amended TWEA in 1941, adding the subsection that includes the “regulate ... importation” language on which the President relies today. The Reports explained Congress’s primary purpose for the 1941 amendment: shoring up the President’s ability to control foreign-owned property by maintaining and strengthening the “existing system of foreign property control (commonly known as freezing control).”

When Congress enacted IEEPA in 1977, limiting the circumstances under which the President could exercise his emergency authorities, it kept the “regulate ... importation” language from TWEA. The other two relevant pieces of legislative history—the Senate and House Reports that accompanied IEEPA—demonstrate that Congress’s intent regarding the scope of this statutory language remained the same. As the Senate Report explained, Congress’s sole objective for the “regulate ... importation” subsection was to grant the President the emergency authority “to control or freeze property transactions where a foreign interest is involved.” The House Report likewise described IEEPA as empowering the President to “regulate or freeze any property in which any foreign country or a national thereof has any interest.”

However, then I read Kavanaugh, J. who writes the following:

In 1971, President Nixon imposed 10 percent tariffs on almost all foreign imports. He levied the tariffs under IEEPA’s predecessor statute, the Trading with the Enemy Act (TWEA), which similarly authorized the President to “regulate ... importation.” The Nixon tariffs were upheld in court.

When IEEPA was enacted in 1977 in the wake of the Nixon and Ford tariffs and the Algonquin decision, Congress and the public plainly would have understood that the power to “regulate ... importation” included tariffs. If Congress wanted to exclude tariffs from IEEPA, it surely would not have enacted the same broad “regulate ... importation” language that had just been used to justify major American tariffs on foreign imports.

And I also find this compelling.

To add onto this, Roberts, C. J. says: IEEPA’s grant of authority to “regulate ... importation” falls short. IEEPA contains no reference to tariffs or duties. The Government points to no statute in which Congress used the word “regulate” to authorize taxation. And until now no President has read IEEPA to confer such power.

This seems directly contradictory to Kavanaugh, J.'s dissent! Kavanaugh, J. claims that Nixon used the word “regulate” to impose tariffs. And the word isn't just in some random other statute: Nixon did so under TWEA, the predecessor of IEEPA, and when Congress enacted IEEPA in 1977 it kept the “regulate ... importation” language from TWEA (per Jackson, J.). So the point that no President has read IEEPA to confer such power seems pretty weak, when Nixon apparently did so under TWEA.

I have no conclusion from this, but IMO both Jackson, J. and Kavanaugh, J. have pretty strong points in opposing directions.


Kavanaugh’s reasoning is that a wartime law, TWEA, can be congruent to a peacetime law, IEEPA. The rest of the court acknowledged that the President always had control of tariffs during war.

Jackson is a woman just fyi.

Ah thanks for clarifying.

I asked about this when they presented the project at the Linux Plumbers conference. They replied that it's not really intended to be a security boundary, and that you should not let anyone malicious load these programs.

Given this threat model, I think their project is entirely reasonable. Safe Rust will prevent accidental mistakes, even if you could technically circumvent it if you really tried.


eBPF's limitations are as much about reliability as security. The bounded loop restriction, for instance, prevents eBPF programs from locking up your machine.


You could still imagine terminating these programs after some bounded time or cycle count. It isn't as good as static verification, but it's certainly more flexible.
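To make the idea concrete, here is a minimal sketch of runtime termination via a cycle budget, in contrast to eBPF's static verification. The tiny instruction set (`Nop`, `Jump`, `Halt`) and the `run` function are made up for illustration; the point is only that each step spends one unit of fuel, so even an unbounded loop is cut off after a bounded number of cycles.

```rust
// Hypothetical opcodes for a toy program; not any real eBPF/Rex ISA.
enum Op {
    Nop,         // do nothing, advance
    Jump(usize), // unconditional jump: unbounded loops become expressible...
    Halt,        // normal termination
}

// Run with a cycle budget: every executed instruction costs one unit of
// fuel, and the program is terminated once the budget is exhausted.
fn run(prog: &[Op], mut fuel: u64) -> Result<(), &'static str> {
    let mut pc = 0;
    while let Some(op) = prog.get(pc) {
        if fuel == 0 {
            return Err("fuel exhausted"); // terminated at runtime, not rejected statically
        }
        fuel -= 1;
        match op {
            Op::Nop => pc += 1,
            Op::Jump(target) => pc = *target, // ...but they cannot run forever
            Op::Halt => return Ok(()),
        }
    }
    Ok(())
}

fn main() {
    // A well-behaved program finishes within its budget.
    assert!(run(&[Op::Nop, Op::Halt], 10).is_ok());
    // An infinite loop is forcibly terminated instead of hanging the machine.
    assert!(run(&[Op::Jump(0)], 10).is_err());
    println!("ok");
}
```

The trade-off matches the comment above: unlike the verifier's bounded-loop rule, this accepts any program up front, at the cost of sometimes killing work mid-flight.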


If you're doing this kind of "optimistic" reliability story, where developers who stay on the happy path are unlikely to cause any real problems, I don't get what the value of something like this is over just doing a normal Rust LKM that isn't locked into a specific set of helpers.


You can extend the kernel functionality without having to develop a whole kernel module? Just because your module has no memory errors does not mean that it is working as intended.

Further, if you want to hook into specific parts of the kernel, you might well end up writing far more boilerplate instead of just intercepting the one call you're actually interested in and adding some metadata or doing some access control.

I personally am all for a kernel that can do more things for more people with less bespoke kernel modules or patches.


I guess my point is that the delta between a "whole kernel module" and a "Rex extension" is pretty small.


if nothing else, rex makes a good central place to evolve a set of helper code for doing ebpf-like stuff in a rust kernel module. wouldn't be too surprised if it eventually becomes closer to an embedded dsl.


Sure! Can't disagree with that.


As I understand it eBPF has also given up on that due to Spectre. As a result you need root to use it on most distros anyway, and the kernel devs aren't going to expand its use (some systems are stuck on cBPF).

So it's not like eBPF is secure and this isn't. They're both insecure in different ways.


So eBPF for a WAF isn't worth it?

re: eBPF and WAFs: https://news.ycombinator.com/item?id=45951011

From https://news.ycombinator.com/context?id=43564972 :

> Should a microkernel implement eBPF and WASM, or, for the same reasons that justify a microkernel should eBPF and most other things be confined or relegated or segregated in userspace; in terms of microkernel goals like separation of concerns and least privilege and then performance?

"Isolated Execution Environment for eBPF" (2025-04) https://news.ycombinator.com/item?id=43697214

"ePass: Verifier-Cooperative Runtime Enforcement for eBPF" (2025-12) https://ebpf.foundation/epass-verifier-cooperative-runtime-e... .. https://news.ycombinator.com/item?id=46412121


I have no insight into the Asahi project, but the LKML link goes to an email from James Calligeros containing code written by Hector Martin and Sven Peter. The code may have been written a long time ago.


That's an email from James Calligeros. All this patch says is that the author is Hector Martin (and Sven Peter). The code could have been written a long time ago.


Where I'm from, it probably would not be stolen by anyone.


How?


Verizon unlimited plans will be about that after taxes and fees for two lines.

Add in phones being financed and you’re easily over $200/mo direct with a carrier.


It's trivial to implement an async runtime in the kernel. The kernel's workqueue is already essentially a runtime.
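To illustrate why a work queue is "essentially a runtime" already: a minimal userspace sketch, assuming nothing about the actual kernel code. Tasks live on a queue, a worker loop pops and polls them, and the waker just re-enqueues the task, much like `queue_work()` rescheduling a work item. Everything here is std-only and hypothetical.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::mpsc::{channel, Sender};
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// A task: a future plus a handle back to the queue it reschedules onto.
struct Task {
    future: Mutex<Pin<Box<dyn Future<Output = ()> + Send>>>,
    queue: Mutex<Sender<Arc<Task>>>, // Mutex makes the Sender shareable
}

impl Wake for Task {
    fn wake(self: Arc<Self>) {
        // Waking = putting the work item back on the queue.
        let _ = self.queue.lock().unwrap().send(self.clone());
    }
}

// The worker loop: pop a task, poll it, let the waker re-enqueue it.
fn block_on(fut: impl Future<Output = ()> + Send + 'static) {
    let (tx, rx) = channel::<Arc<Task>>();
    let task = Arc::new(Task {
        future: Mutex::new(Box::pin(fut)),
        queue: Mutex::new(tx.clone()),
    });
    tx.send(task).unwrap();
    drop(tx); // the loop ends once no task (or waker) holds a sender
    while let Ok(task) = rx.recv() {
        let waker = Waker::from(task.clone());
        let mut cx = Context::from_waker(&waker);
        let _ = task.future.lock().unwrap().as_mut().poll(&mut cx);
    }
}

// A future that returns Pending once, waking itself so it gets re-polled.
struct YieldOnce(bool);
impl Future for YieldOnce {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            cx.waker().wake_by_ref(); // re-enqueue, like rescheduling work
            Poll::Pending
        }
    }
}

static RESULT: AtomicU32 = AtomicU32::new(0);

fn main() {
    block_on(async {
        YieldOnce(false).await;
        RESULT.store(42, Ordering::SeqCst);
    });
    assert_eq!(RESULT.load(Ordering::SeqCst), 42);
    println!("ok");
}
```

The whole executor is the queue plus a loop; an in-kernel version would swap the channel for the existing workqueue machinery, which is the sense in which the workqueue is already most of a runtime.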


I was about to take offence at the use of “trivial” in this context. But then I noticed your handle, lol. You have the license to say that, thanks for your contributions!


It never made it into upstream Linux, but there is already a sample implementation that Wedson wrote in 2022: https://github.com/Rust-for-Linux/linux/pull/798


Won't that be an eager runtime though? Breaking Rust's assumption that futures do nothing until polled? Unless you don't submit it to the queue until the poll call, I guess


It won't be different from Tokio. When you pass a future to tokio::spawn, that will also eagerly execute the future right away.


> IIRC Alice from the tokio team also suggested there hasn't been much interest in pushing through these difficulties more recently, as the current performance is "good enough".

Well, I think there is interest, but mostly for file IO.

For file IO, the situation is pretty simple. We already have to implement that using spawn_blocking, and spawn_blocking has the exact same buffer challenges as io_uring does, so translating file IO to io_uring is not that tricky.

On the other hand, I don't think tokio::net's existing APIs will support io_uring. Or at least they won't support the buffer-based io_uring APIs; there is no reason they can't register for readiness through io_uring.
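The "same buffer challenges" point about file IO can be sketched with a by-value buffer API: because the worker (or the kernel, with io_uring) owns the buffer for the duration of the operation, the call takes the buffer by value and hands it back with the result. This is a std-only illustration using a plain thread as a stand-in for `spawn_blocking`; the function name and signature are made up, not Tokio's API.

```rust
use std::fs::File;
use std::io::{self, Read};
use std::thread;

// Owned-buffer read: the buffer is moved into the worker and returned
// alongside the result -- the same ownership contract an io_uring
// submission needs, since the caller must not touch the buffer mid-flight.
fn read_owned(path: &str, mut buf: Vec<u8>) -> (io::Result<usize>, Vec<u8>) {
    let path = path.to_string();
    thread::spawn(move || {
        let res = File::open(&path).and_then(|mut f| f.read(&mut buf));
        (res, buf) // hand ownership back, whether the read succeeded or not
    })
    .join()
    .expect("worker panicked")
}

fn main() {
    let path = std::env::temp_dir().join("owned_buf_demo.txt");
    std::fs::write(&path, b"hello").unwrap();
    let (res, buf) = read_owned(path.to_str().unwrap(), vec![0u8; 16]);
    let n = res.unwrap();
    assert_eq!(&buf[..n], b"hello");
    println!("ok");
}
```

Contrast this with `tokio::net`'s `&mut [u8]` readiness-style APIs: a borrowed slice cannot be safely left with the kernel across an await point, which is exactly why the buffer-based io_uring APIs don't fit them.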


This covers probably 90% of the usefulness of io_uring for non-niche applications. Its original purpose was doing buffered async file IO without the bunch of caveats that made earlier approaches effectively useless. The biggest speedup I’ve found with it is ‘stat’ing large sets of files in the VFS cache. It can literally be 50x faster at that, since you can do 1000 files with a single system call and the data you need from the kernel is all in memory.

High throughput network usecases that don’t need/want AF_XDP or DPDK can get most of the speedup with ‘sendmmsg/recvmmsg’ and segmentation offload.


For TCP streams, syscall overhead isn't really a big issue; you can easily transfer large chunks of data in each write(). If you have TCP segmentation offload available, you'll have no serious issues pushing 100gbit/s. Also, if you're sending static content, don't forget sendfile().

UDP is a whole other kettle of fish; it gets very complicated to go above 10gbit/s or so. This is a big part of why QUIC really struggles to scale well for fat pipes [1]. sendmmsg/recvmmsg + UDP GRO/GSO will probably get you to ~30gbit/s, but beyond that is a real headache. The issue is that UDP is not stream focused, so you're making a ton of little writes, and the kernel networking stack as of today does a pretty bad job with these workloads.

FWIW even the fastest QUIC implementations cap out at <10gbit/s today [2].

Had a good fight writing a ~20gbit userspace UDP VPN recently. Ended up having to bypass the kernel's networking stack using AF_XDP [3].

I'm available for hire btw, if you've got an interesting networking project feel free to reach out.

1. https://arxiv.org/abs/2310.09423

2. https://microsoft.github.io/msquic/

3. https://github.com/apoxy-dev/icx/blob/main/tunnel/tunnel.go


Yeah all agreed - the only addendum I’d add is for cases where you can’t use large buffers because you don’t have the data (e.g. realtime data streams or very short request/reply cycles). These end up having the same problems, but are not soluble by TCP or UDP segmentation offloads. This is where reduced syscall overhead (or even better kernel bypass) really shines for networking.


I have a hard time believing that google is serving YouTube over QUIC/HTTP3 at 10Gbit/s, or even 30Gbit/s.


These are per-connection bottlenecks, largely due to implementation choices in the Linux network stack. Even with vanilla Linux networking, vertical scale can get the aggregate bandwidth as high as you want if you don’t need 10G per connection (which YouTube doesn’t), as long as you have enough CPU cores and NIC queues.

Another thing to consider: Google’s load balancers are all bespoke SDN and they almost certainly speak HTTP1/2 between the load balancers and the application servers. So Linux network stack constraints are probably not relevant for the YouTube frontend serving HTTP3 at all.


I'm quite careful to tightly control the dependencies of Tokio. All dependencies are under control by members of the Tokio team or others that I trust.

