There's some irony here...

Posted Feb 13, 2012 22:28 UTC (Mon) by oldtomas (guest, #72579)
Parent article: Wayland - Beyond X (The H)

Reading this (at the top of page 3)

Networking applications can be replaced with remoting tools such as RDP or VNC or are better performed in a browser.

And then, a couple of sentences further

Linux is famously scalable between devices but as Keith Packard points out: "[...] developers designing these systems are more likely to resent X for its complexity, for its memory and CPU footprint [...]"

At the moment my firefox is 123m RES vs 48m RES for X (and that's with a very moderate Firefox usage).

It feels strange to watch clients gravitating more and more to the bloated so-called "thin client" (I mean: HTTP as transport? XML as data serialization? [OK, the very smart ones have discovered JSON for that] Seriously?) and on the other hand hearing complaints about the inefficiency of the X server.

Just sayin'



There's some irony here...

Posted Feb 13, 2012 22:44 UTC (Mon) by dlang (guest, #313) [Link] (46 responses)

yes many apps do turn up the eye candy to the point where sending a video feed is just as efficient.

but that's not all apps.

the sort of apps that work best as a video feed aren't ones that you would normally want to remote in any case.

this seems to be throwing out the baby with the bathwater.

There's some irony here...

Posted Feb 13, 2012 23:54 UTC (Mon) by Kit (guest, #55925) [Link] (45 responses)

The Wayland protocol wouldn't necessarily require streaming what is effectively a video to the remote side.

Weston (the reference implementation, the equivalent of Xorg, as opposed to the Wayland protocol itself, the equivalent of X11) could be replaced (or just extended), in cases where one wants to display an application on a remote system, with a compositor that leverages various techniques to reduce bandwidth consumption. It could use many of the techniques that NX uses (compression and caching), so it would likely end up with similar performance. It shouldn't be too hard to beat plain Xorg running more modern apps, since they increasingly operate in ways that aren't ideal for the network transparency of X11.

There's some irony here...

Posted Feb 14, 2012 0:02 UTC (Tue) by dlang (guest, #313) [Link] (44 responses)

sending a series of screenshots to the remote side is roughly the equivalent of sending a video, but without the compression advantages that the streaming video formats provide.

There's some irony here...

Posted Feb 14, 2012 0:37 UTC (Tue) by Kit (guest, #55925) [Link] (42 responses)

Sending a series of screenshots is exactly what I meant it wouldn't have to do.

Since the compositor is notified whenever the application changes what it has drawn, the compositor could then just do a comparison of the last sent frame and send only the delta, which under normal conditions would greatly reduce the bandwidth needed. Throw in some intelligent caching (like what NX or RDP does, although it would be harder to be as intelligent, based on my understanding of the protocol), and the bandwidth used could potentially be lowered even more.
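
As a rough sketch of the delta computation described above (none of this comes from Wayland, Weston, NX or RDP; the names are invented for illustration), a remoting compositor could diff the newly committed frame against the last frame it actually transmitted and send only the bounding box of the pixels that changed:

    #include <stdbool.h>
    #include <stdint.h>

    struct rect { int x, y, w, h; };

    /* Compare the last frame sent to the remote side against the frame the
     * client just committed and compute the bounding box of changed pixels.
     * Returns false when the frames are identical (nothing to send). */
    bool frame_delta(const uint32_t *prev, const uint32_t *cur,
                     int width, int height, struct rect *out)
    {
        int min_x = width, min_y = height, max_x = -1, max_y = -1;

        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                if (prev[y * width + x] != cur[y * width + x]) {
                    if (x < min_x) min_x = x;
                    if (x > max_x) max_x = x;
                    if (y < min_y) min_y = y;
                    if (y > max_y) max_y = y;
                }

        if (max_x < 0)
            return false;            /* no pixel changed: send nothing */

        out->x = min_x;
        out->y = min_y;
        out->w = max_x - min_x + 1;
        out->h = max_y - min_y + 1;
        return true;                 /* compress and send only this rectangle */
    }

Real systems like NX and RDP add caching and compression on top, but the basic idea is simply: transmit only what the damage events say might have changed.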

I would imagine a 'worst case' situation would be smooth-scrolling in a large window (like a browser), but that would also be a case that X handles similarly poorly.

I'm glad that the debate is no longer over "is network transparency even possible", but rather over "how well can network transparency perform".

There's some irony here...

Posted Feb 14, 2012 0:54 UTC (Tue) by dlang (guest, #313) [Link] (4 responses)

there was never a question of whether it is possible; the question is whether the Wayland developers will bother to do so, or will declare such capabilities 'obsolete' and not needed in the modern world.

Frankly, I expect that in the near future network transparency will start to be more used as people have devices with high resolution, but low CPU and memory (i.e. tablets and phones with HD displays, connected to projectors for example) where it will be desirable to run the app on a good server and remote the display and controls to the user.

This may not be practical over the Internet, but on a local (even local wireless) network this should work very well.

There's some irony here...

Posted Feb 14, 2012 0:58 UTC (Tue) by daglwn (guest, #65432) [Link] (1 responses)

> Frankly, I expect that in the near future network transparency will start
> to be more used as people have devices with high resolution, but low CPU
> and memory (i.e. tablets and phones with HD displays, connected to
> projectors for example) where it will be desirable to run the app on a
> good server and remote the display and controls to the user.

I share this vision. Someone ought to be able to run a media server in the home and watch videos, etc. on handheld devices without requiring gobs of processor. There will never be enough processing power on embedded devices to keep up with what users will demand of their displays.

I expect it will even be done over the internet someday.

There's some irony here...

Posted Feb 14, 2012 1:05 UTC (Tue) by dlang (guest, #313) [Link]

videos are a special case where hardware implementations of the common algorithms can be built that allow low-power devices to keep up with the decompression load.

but that still leaves a huge number of other applications

There's some irony here...

Posted Feb 14, 2012 22:21 UTC (Tue) by dlang (guest, #313) [Link] (1 responses)

by the way, one place where we see people talking about use cases that should be trivial with a display protocol that has transparent network features is using an HD TV as a display for content running on other devices (sliding a window from a tablet to a TV, and similar)

with X and xpra, for example, this should be a trivial thing to do; instead you see it as the latest 'gee whiz' feature of closed proprietary systems that can only remote specific types of content.

There's some irony here...

Posted Feb 14, 2012 22:42 UTC (Tue) by khim (subscriber, #9252) [Link]

with X and xpra, for example, this should be a trivial thing to do; instead you see it as the latest 'gee whiz' feature of closed proprietary systems that can only remote specific types of content.

The most useful examples include watching video (you watch video on the tablet, then drag it to the HDTV if you like it), or turning the tablet off once you've dragged the "content" to the HDTV. Or just a simple change of controls when you drag the window (the controls suitable for a tablet are removed and remote-friendly controls are installed). Sure, it's easy to create 90% of the whole solution with X and xpra, but I doubt people will be content with it. And to reach a 100% solution you'll need help from the application side anyway.

There's some irony here...

Posted Feb 14, 2012 0:56 UTC (Tue) by daglwn (guest, #65432) [Link] (10 responses)

> I'm glad that the debate isn't over "is network transparency even
> possible" anymore,

That was never really the question. The question has been and continues to be, "will we bother to implement network transparency?"

It seems so far the only answer ever given is, "no."

dlang is right. It IS throwing the baby out with the bathwater and it IS removing necessary features and disrupting common workflows.

"Not now" vs "No"

Posted Feb 14, 2012 16:33 UTC (Tue) by CChittleborough (subscriber, #60775) [Link] (9 responses)

The question has been and continues to be, "will we bother to implement network transparency?" It seems so far the only answer ever given is, "no."

Actually, Kristian Høgsberg, Adam "Ajax" Jackson and others have all written about ways to provide network transparency in the future. For the present, the Wayland/Weston developers are concentrating on getting the protocol right and providing a first implementation.

Once this is done, they plan to produce a production-quality X-server-as-a-Wayland-client, AFAICT. (They've already demo'd X-on-Wayland, but ISTR that those were not production-quality.)

In the longer term, we already have proof-of-concept implementations of network-transparency in the form of VNC, Spice, Xpra, etc. These use various approaches; the "Remote Wayland" project would need to select an approach, design a network protocol and write some non-trivial software, which will take some time, but success is basically guaranteed if vendors provide enough resources.

"Not now" vs "No"

Posted Feb 14, 2012 17:35 UTC (Tue) by daglwn (guest, #65432) [Link] (8 responses)

> Wayland/Weston developers are concentrating on getting the protocol right

How can they be sure the protocol is right if they don't consider network transparency from the start?

I'm fine with alternate solutions as long as they perform as well or better than remote X and are as easy to use (ssh -X). I'm doubtful that will happen if it's not designed in from the start.

"Not now" vs "No"

Posted Feb 14, 2012 20:59 UTC (Tue) by wmf (guest, #33791) [Link] (1 responses)

They have considered it; they just haven't implemented it. (There is a concern that nobody will provide funding and it will never get implemented.)

"Not now" vs "No"

Posted Feb 15, 2012 1:32 UTC (Wed) by daglwn (guest, #65432) [Link]

That's exactly the problem.

Network transparency will need a second protocol

Posted Feb 15, 2012 9:14 UTC (Wed) by CChittleborough (subscriber, #60775) [Link] (5 responses)

I should have made myself clear; sorry. In the sentence daglwn quoted, I meant the non-networked protocol for use between apps and Wayland servers running on the same system. That protocol involves sharing graphics buffers via the kernel, so it can never be network-transparent. The devs have released 1.0, so they must think they've got that protocol about right.

Network transparency will require a second protocol which sends some kind of image deltas from the app to the server. Choosing how to do those deltas will take some time, but should pose no insurmountable challenges.

Network transparency will need a second protocol

Posted Feb 17, 2012 10:09 UTC (Fri) by jezuch (subscriber, #52988) [Link] (4 responses)

> Network transparency will require a second protocol which sends some kind of image deltas from the app to the server. Choosing how to do those deltas will take some time, but should pose no insurmountable challenges.

My impression was that the Wayland protocol is already all about deltas: applications update the buffer and tell Wayland what changed (i.e. what the delta is). Pushing those over the network seems trivial. Is my impression wrong?

Network transparency will need a second protocol

Posted Feb 17, 2012 12:24 UTC (Fri) by khim (subscriber, #9252) [Link] (1 responses)

The devil is in the details. Some trivial (and fast in the local case!) effects (think fade-ins when you try to pull a list on Android beyond its limit) can generate literally tons of update messages. And this is exactly the type of effect which Wayland is supposed to make sensible!

Network transparency will need a second protocol

Posted Feb 24, 2012 16:17 UTC (Fri) by wookey (guest, #5501) [Link]

'literally tons'.

I never realised messages had so much mass. How come the handset doesn't get really heavy after a while - does the excess weight get sent to the base-station posts where it can dissipate harmlessly?

Local Wayland protocol does not use deltas

Posted Feb 18, 2012 11:25 UTC (Sat) by CChittleborough (subscriber, #60775) [Link] (1 responses)

The Wayland protocol does not use deltas. The server (=compositor) maintains a graphics buffer and gets the graphics hardware to display it. When a client wishes to change the display (e.g., because of user input), it renders into its own graphics buffer (=window) and then tells the server (=compositor) about the damage region(s). The server can then update the visible graphics buffer. The aim is that the visible buffer is always what you would get if you composited the client buffers according to window position etc., but the server only processes the parts of client buffers that have changed.
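
To make that last point concrete, here is a minimal sketch (plain C, not actual Weston code; the names are invented) of the compositor-side step: on commit, copy only the damaged rectangle from the client's buffer into the buffer being displayed:

    #include <stdint.h>
    #include <string.h>

    struct damage { int x, y, width, height; };

    /* For brevity, both buffers share the same stride (pixels per row) and the
     * client window is assumed to sit at offset (0, 0); a real compositor also
     * handles window position, stacking order and transformations. */
    void apply_damage(uint32_t *visible, const uint32_t *client,
                      int stride, const struct damage *d)
    {
        for (int row = 0; row < d->height; row++) {
            size_t off = (size_t)(d->y + row) * stride + d->x;
            memcpy(&visible[off], &client[off],
                   (size_t)d->width * sizeof(uint32_t));
        }
    }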

Local Wayland protocol does not use deltas

Posted Feb 20, 2012 11:24 UTC (Mon) by etienne (guest, #25256) [Link]

Is it the case that so much eye candy has been added to the desktop (xterm with transparent background, ...) that a remote desktop cannot handle that many display primitives, and the system has to deal with bitmaps only, sampled after the visual effects are finished?
It seems that, on a very high latency network, we may want simple display primitives, and use fonts, graphics, buttons, mouse shapes and backgrounds stored locally (when their UUID is available locally)...

There's some irony here...

Posted Feb 14, 2012 1:09 UTC (Tue) by dlang (guest, #313) [Link] (14 responses)

something as simple as moving a window around will produce a huge delta.

Also, you can only use the GPU on a box if you limit yourself to one display per box; if you have several remote displays going on, sharing the GPU becomes a limiting factor.

being able to leverage remote GPUs for the display would be a very good thing, but unless you design it into the protocol early, you are very unlikely to be able to retrofit it and have it work sanely.

There's some irony here...

Posted Feb 14, 2012 1:55 UTC (Tue) by Kit (guest, #55925) [Link] (13 responses)

> something as simple as moving a window around will produce a huge delta.

Only if you're following the VNC "whole-desktop" model, and even then it would likely require extra work.

The logical way (at least IMO) would be to treat all the applications as distinct objects, with their own damage events. The computer you're sitting at would take all the windows it's getting from the remote system, and then composite them onto the screen. Just like XComposite made it so applications wouldn't have to redraw themselves whenever a window was moved above them, this would enable the compositor running across the internet from you to not care if you were moving the windows around the screen like a madman.

Then there would be no delta from moving a window around (well, unless you're talking about a sub-window in an MDI application...).
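
A sketch of that model (all names here are hypothetical, not part of any existing protocol): the receiving side keeps one cached surface per remote window, so a window move touches only the locally stored position and no pixels cross the network:

    #include <stdint.h>

    struct remote_surface {
        uint32_t  id;            /* identifies the window on the remote end  */
        int       x, y;          /* position chosen by the local compositor  */
        int       width, height;
        uint32_t *pixels;        /* last contents received over the network  */
    };

    /* Moving a window is a purely local operation: recomposite from the
     * cached pixels; nothing needs to be requested from the remote side. */
    void move_window(struct remote_surface *s, int new_x, int new_y)
    {
        s->x = new_x;
        s->y = new_y;
        /* ...schedule a local repaint of the old and new screen areas... */
    }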

> being able to leverage remote GPUs for the display would be a very
> good thing, but unless you design it in to the protocol early, you
> are very unlikely to be able to retrofit it and have it work sanely.

Applications render into an off-screen buffer that is then passed to the compositor (Weston, in the case of the current implementation), which then paints it to the screen. At least at a high level, this should be quite conducive to using remote GPUs (well, assuming 'remote' means the system the application is running on, not the system you're sitting at, beyond whatever the local compositor decides to do when it's doing the final screen painting).

There's some irony here...

Posted Feb 14, 2012 2:44 UTC (Tue) by dlang (guest, #313) [Link] (5 responses)

>> something as simple as moving a window around will produce a huge delta.

> Only if you're following the VNC "whole-desktop" model, and would likely require extra work.

remember that the Wayland people are the ones saying that the solution to network transparency is VNC.

There's some irony here...

Posted Feb 14, 2012 5:49 UTC (Tue) by Per_Bothner (subscriber, #7375) [Link] (4 responses)

remember that the Wayland people are the ones saying that the solution to network transparency is VNC.

No, they're saying a solution is something like VNC. Using VNC is useful because it supports multi-platform remoting, but a protocol tuned for Wayland would obviously be more efficient for Wayland-to-Wayland remoting. But designing and implementing the protocol is not a priority until the local case is stable - as long as they keep the issue in mind, which they seem to be doing.

There's some irony here...

Posted Feb 14, 2012 6:02 UTC (Tue) by raven667 (subscriber, #5198) [Link] (3 responses)

Apparently it is more fun to make strong statements based on selective misreading rather than have an actual discussion. Argument by vehemence. Personally I am surprised no one has mentioned SPICE as a possible candidate for Wayland remoting, although maybe the NX folks will come out with something neat. VNC is pretty much guaranteed to be supported in any case. Also, it's not like X11 will magically disappear overnight; all the current toolkits will probably continue to work for the foreseeable future, and there is no problem running X11 on Wayland, much like how X11 runs on other systems.

There's some irony here...

Posted Feb 14, 2012 6:30 UTC (Tue) by dlang (guest, #313) [Link] (2 responses)

the concern is not about the difficulty in running X11 apps on a Wayland server. That works today (plus if it didn't work, Wayland would be pretty hard to bootstrap)

the concern is getting future killer app Y that was built with a Wayland graphics library to run on a system that uses X11 for its display. This will probably start to be a problem a few months after Fedora and/or Ubuntu ship with Wayland as the default display instead of X11 (not that the killer app will need anything that Wayland provides; it's just that it will be built using the new 'cool and trendy' graphics library).

or alternatively, make it so that you can use Wayland in the places where people currently use X11 network transparency.

There are a lot of people writing infrastructure code for Linux that don't seem to have any Unix experience. This doesn't have to be a bad thing, but when these people dismiss existing functionality as "nobody could ever want that", it then becomes a problem. Especially if this work goes into a major distro.

There's some irony here...

Posted Feb 14, 2012 7:03 UTC (Tue) by raven667 (subscriber, #5198) [Link]

I dunno, I would still expect apps to use toolkits such as Qt or GTK+ which can output to several different kinds of graphics drivers, including X11. I suppose if someone did write a "killer app" which didn't use a toolkit and spoke the Wayland protocol directly, and you didn't want to run it over VNC, and a high-performance remote display protocol hadn't yet been implemented, then that could be a problem. That's a fair number of "ifs" though.

There's some irony here...

Posted Feb 14, 2012 9:20 UTC (Tue) by daniels (subscriber, #16193) [Link]

There are a lot of people writing infrastructure code for Linux that don't seem to have any Unix experience.

As far as I can tell these days, True UNIX Experience is limited to those who exclusively comment on LWN, Reddit and Hacker News, rather than those who write code.

There's some irony here...

Posted Feb 14, 2012 8:56 UTC (Tue) by marm (guest, #53705) [Link] (6 responses)

> Only if you're following the VNC "whole-desktop" model, and would likely require extra work.

Well, and what about scrolling the contents inside a window?

There's some irony here...

Posted Feb 14, 2012 9:47 UTC (Tue) by alankila (guest, #47141) [Link] (5 responses)

This keeps on being brought up.

While I don't know if it will ever get done, nothing at all would prevent an application from supplying the display system with a rendering hint that says: "in my window texture, at rectangular region (x1, y1) to (x2, y2), the contents are shifted up by z pixels". It would do this at the same time as it pushes an updated texture to the compositor.

Now suppose remoting is done through a dumb transmit-pixel-images kind of protocol. A hint like this would allow the compositor to send this event to the remote display followed by the z rows of data which appeared at either top or bottom, thus achieving what must be the optimum efficiency.

These events could be generated by the relevant toolkits when they can prove that the proposed optimization is valid. Things like fixed backgrounds in scrollable areas would break this kind of optimization, although ways would exist to get around even that (instead of working with the single fully rendered window texture, work with the individual textures that comprise the UI elements, so a textarea is one texture at specific coordinates laid over a background texture, etc.).
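
For what it's worth, such a hint could be as small as this (a sketch only; nothing like it exists in the Wayland protocol, and all names are invented). The receiving side shifts its cached copy of the rectangle, and only the newly exposed rows then need fresh pixels:

    #include <stdint.h>
    #include <string.h>

    struct scroll_hint {
        int x1, y1;    /* top-left of the scrolled rectangle            */
        int x2, y2;    /* bottom-right (exclusive)                      */
        int dz;        /* number of rows the contents shifted up by     */
    };

    /* Apply the hint to a locally cached copy of the window (stride is in
     * pixels). Afterwards only the bottom dz rows of the rectangle are
     * stale and need to be transmitted. */
    void apply_scroll(uint32_t *pixels, int stride, const struct scroll_hint *h)
    {
        int width  = h->x2 - h->x1;
        int height = h->y2 - h->y1;

        if (h->dz <= 0)
            return;                         /* nothing scrolled up */

        for (int row = 0; row + h->dz < height; row++)
            memcpy(&pixels[(size_t)(h->y1 + row) * stride + h->x1],
                   &pixels[(size_t)(h->y1 + row + h->dz) * stride + h->x1],
                   (size_t)width * sizeof(uint32_t));
    }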

There's some irony here...

Posted Feb 14, 2012 10:30 UTC (Tue) by marm (guest, #53705) [Link]

That's a neat idea. If it will ever get done, as you say... I agree that X11 has grown to an unsustainable state, but I still have a sad feeling that Wayland is motivated much more by eye candy than by usability. But let us see.

There's some irony here...

Posted Feb 14, 2012 21:56 UTC (Tue) by HelloWorld (guest, #56129) [Link] (3 responses)

> While I don't know if it will ever get done, nothing at all would prevent an application from supplying the display system a rendering hint that says: "in my window texture, at rectangular region (x1, y1) to (x2, y2), the contents are shifted up by z pixels". It would do this at the same time when it pushes an updated texture for the compositor.
This is precisely the sort of stuff that doesn't belong in Wayland. If you add things like this, it is all too tempting to add things like an optimization for lines being drawn and one for characters being drawn and oops, you just invented a new rendering protocol.
That isn't necessarily a bad idea, but it should be done separately.

There's some irony here...

Posted Feb 15, 2012 19:15 UTC (Wed) by alankila (guest, #47141) [Link] (2 responses)

Well, whatever way the remoting gets done. Not saying it has to be in Wayland's protocol, just saying that reasonably efficient scrolling is possible given that somebody who knows how the display changed passes that information along, rather than losing it and then requiring some expensive rediscovery process to happen...

There's some irony here...

Posted Feb 15, 2012 19:20 UTC (Wed) by HelloWorld (guest, #56129) [Link] (1 responses)

Well, this is exactly what Høgsberg is saying: network transparency should be implemented in the toolkit, because the toolkit has access to the relevant information.

There's some irony here...

Posted Feb 15, 2012 19:24 UTC (Wed) by dlang (guest, #313) [Link]

but if it's only implemented in each toolkit, then you will end up with a nightmare of different remoting protocols, and the resulting compatibility issues that this raises.

and what happens if one of these toolkits doesn't support the display mode on your platform (since each toolkit will need to write a sender and receiver for its protocol)?

There's some irony here...

Posted Feb 14, 2012 1:21 UTC (Tue) by nix (subscriber, #2304) [Link] (10 responses)

Quite. I would expect that with minimal observation of the damaged regions (or perhaps with a little more information from the client) the 'network compositor' could detect regions which are repeatedly reused and transmit them once to the far end, following which it only needs to send 'put that here'. (This is similar to what X does with GlyphSets, only automated.)

If the clients can communicate some indication as to which elements are likely to be repeated (as they do with GlyphSets) this changes from a relatively expensive pattern-recognition job to something utterly trivial -- and, for textual display at least, the bandwidth savings would be immense. (Who complains about the network bandwidth usage of client-side fonts? Nobody, that's who, because they don't need much because of precisely this trick.)
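
A sketch of how the sending side might decide between "here are the pixels" and "put the tile you already have here" (the cache layout and names are made up for illustration; real protocols such as NX and RDP do something more sophisticated):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define CACHE_SLOTS 4096

    /* Hashes of tiles the remote end is known to have already received. */
    static uint64_t tile_cache[CACHE_SLOTS];

    /* FNV-1a over the tile's pixel words. */
    static uint64_t tile_hash(const uint32_t *px, size_t n)
    {
        uint64_t h = 14695981039346656037ULL;
        for (size_t i = 0; i < n; i++) {
            h ^= px[i];
            h *= 1099511628211ULL;
        }
        return h;
    }

    /* Returns true if the pixels themselves must be transmitted; false means
     * a short "place cached tile <hash> at (x, y)" message is enough. */
    bool must_send_pixels(const uint32_t *px, size_t n)
    {
        uint64_t h = tile_hash(px, n);
        size_t slot = (size_t)(h % CACHE_SLOTS);

        if (tile_cache[slot] == h)
            return false;        /* remote side already has this tile   */
        tile_cache[slot] = h;    /* record that we are about to send it */
        return true;
    }

With glyph-sized tiles, and a toolkit that marks which elements repeat, the lookup becomes trivial, which is essentially the GlyphSet trick described above.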

There's some irony here...

Posted Feb 14, 2012 10:06 UTC (Tue) by alankila (guest, #47141) [Link] (9 responses)

If Wayland is properly optimized for local usage, the clients render directly into GPU memory, which precludes fast access to the pixels of the window. (CPU reading from GPU memory is very slow.) I'm personally hoping that this will be the case, either through applications writing the pixels to a GPU texture directly, or applications using the GPU's hardware to draw the primitives, or both.

In theory, window textures can be snapshotted/copied to main memory and any analysis can be performed there, but I'd personally rather see application-level remoting because, for pretty much all open source software that exists, it only needs to be done twice: for GTK+ and Qt. It doesn't even depend on Wayland being used (you could write and prototype this code right away, no need to wait for Wayland deployment). Of course, X remoting existing today means that we'll probably be using it until it's taken away from us...

There's some irony here...

Posted Feb 14, 2012 11:33 UTC (Tue) by nix (subscriber, #2304) [Link] (2 responses)

Hm, that's true, the toolkit can easily remember all this stuff, including enough to implement a lot of useful optimizations. There are still more than two toolkits, but only two seem to have been written with any expectation of multiple-backend support, so, yes, remoting support in Gtk and Qt is probably sufficient.

Of course this means duplicating security-critical networking and authentication code in (at least) two places, rather than having it centralized in one place. (One place which still needs to run as root! Why? I thought the X-as-root thing was on the verge of solution, or is everyone now chasing Wayland too hard to fix this security weal on the side of Unix?)

There's some irony here...

Posted Feb 14, 2012 16:00 UTC (Tue) by farnz (subscriber, #17727) [Link] (1 responses)

I'm sure I'm missing something, but why do you need to duplicate the security-critical authentication code? I would expect something more along the lines of "remote application spawns SSH process that tunnels the toolkit-specific protocol to the machine running the Wayland compositor, where a serialised-GDK Wayland client is started just for this app", where SSH handles the authentication for you, and the GDK Wayland client runs under your credentials.

There's some irony here...

Posted Feb 14, 2012 23:06 UTC (Tue) by nix (subscriber, #2304) [Link]

Yeah, that would work, I suppose. SSH tunnelling is cheap enough now that I only notice it when I'm trying to tunnel full-motion video across it (*that* is likely to always be too slow to be useful, since CPUs are getting no faster and most crypto algorithms are not terribly parallelizable).

There's some irony here...

Posted Feb 14, 2012 13:08 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

> CPU reading from GPU memory is very slow.
It's not THAT slow. It's certainly possible to read GPU surfaces 5-10 times each second (more than enough for remoting) without noticeable load.

There's some irony here...

Posted Feb 14, 2012 18:49 UTC (Tue) by wmf (guest, #33791) [Link] (4 responses)

Since the window is in GPU memory already, it may be possible to use the GPU to diff/compress it instead of the CPU. For example, to detect scrolling you'd use well-studied motion estimation algorithms.

There's some irony here...

Posted Feb 14, 2012 23:41 UTC (Tue) by nix (subscriber, #2304) [Link] (3 responses)

Aren't most motion estimation algorithms really rather CPU-expensive? I thought Wayland was meant to be more efficient than X, not orders of magnitude less.

There's some irony here...

Posted Feb 15, 2012 5:59 UTC (Wed) by khim (subscriber, #9252) [Link] (2 responses)

I think the idea is to make the simple things simple and the hard things possible. X fails horribly at that: local access requires a lot of useless dances. Wayland improves the situation a lot for the common case, but yes, it makes rare cases more expensive.

There's some irony here...

Posted Feb 15, 2012 15:44 UTC (Wed) by nix (subscriber, #2304) [Link] (1 responses)

Rare cases like 'scrolling the screen'. Sorry, not a rare case unless you define all remote access as 'rare', in which case I'd like to point out this virtualization and 'cloud' stuff that is all the rage these days.

There's some irony here...

Posted Feb 15, 2012 19:20 UTC (Wed) by alankila (guest, #47141) [Link]

The toolkit protocol must support efficient scrolling. If it can, for instance, just send the textarea's text content over the wire and leave it all to the local client, then great, problem solved.

If it's something more complex, like images that don't even exist until they get inside the viewport, then of course the local client must send a request for an updated image pixmap, and that must then get generated and sent, but scrolling for the part that is already known locally should still be possible.

I think it's vital that in a good design we avoid the "let's treat the app as a black box and send data as if it were full-motion video" approach... It's great if it's an option for some dumb app/toolkit that can't do this (some types of games?), but for, say, a GTK+ app I'm convinced that a much better solution exists.

There's some irony here...

Posted Feb 14, 2012 0:59 UTC (Tue) by jmalcolm (guest, #8876) [Link]

I have not dug into the guts of Wayland but is this a fair comparison?

The windows on my desktop, including the one I am typing in now, change very little. I would say that 95%+ of my browser window has not changed in the last twenty seconds even though I have been actively typing (and I am a pretty fast typist).

The bandwidth requirements to stream a video of my browser window would be immense at the quality and frame rate that I expect. I would assume that Wayland would simply provide snapshots of the pixels (or regions of pixels) that have actually changed and only when they change.

If I type at sixty words per minute, a refresh rate of one Hertz (or twice that to be safe) would be sufficient to keep up. I am sure though that the screen is actually refreshing at sixty Hertz or faster even on my shitty laptop.

Now, if I were playing a full-screen, rapid-motion OpenGL game or something, then the video codec comparison might be fairer. Then again, what video codec can do real-time compression and decompression where frames are not guaranteed to be substantially similar? In this extreme case, just sending "snapshots" of each image seems like not too bad an option. The "snapshot" images can certainly be compressed as well, of course.
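
Some back-of-the-envelope numbers for that comparison (all figures below are assumptions chosen for illustration: a 1920x1080 window, 4 bytes per pixel, an uncompressed 60 Hz stream, a 10x20 pixel glyph cell, and roughly 5 characters per second at 60 words per minute):

    #include <stdio.h>

    int main(void)
    {
        const double width = 1920, height = 1080, bytes_per_pixel = 4;
        const double frame_rate = 60;                 /* Hz                     */
        const double glyph_w = 10, glyph_h = 20;      /* one character cell     */
        const double chars_per_sec = 60 * 5 / 60.0;   /* 60 wpm, ~5 chars/word  */

        double streaming = width * height * bytes_per_pixel * frame_rate;
        double typing    = glyph_w * glyph_h * bytes_per_pixel * chars_per_sec;

        printf("uncompressed full-window stream: %.0f MB/s\n", streaming / 1e6);
        printf("damage-only while typing:        %.1f KB/s\n", typing / 1e3);
        return 0;
    }

A video codec would of course shrink the first number enormously, but the gap illustrates why sending only the changed regions is attractive for a mostly static window.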

