
Wayland - Beyond X (The H)

Wayland - Beyond X (The H)

Posted Feb 15, 2012 0:53 UTC (Wed) by Cyberax (✭ supporter ✭, #52523)
In reply to: Wayland - Beyond X (The H) by dlang
Parent article: Wayland - Beyond X (The H)

Things like leftovers from a previous program showing through the solid background of an ncurses program, the cursor in the wrong position in an input field, ^H instead of an actual <backspace>, etc.

A lot of these are simple annoyances, but they're there.

Please, don't tell me that all text-based UIs work without problems for you. I simply won't believe you.



Wayland - Beyond X (The H)

Posted Feb 15, 2012 1:03 UTC (Wed) by dlang (guest, #313) [Link] (26 responses)

I would question how many of these are due to any network transparency vs how many are due to buggy terminal emulators.

I've seen the exact same types of problems on local consoles, no X or network transparency in sight.

I think you're blaming the wrong component for the problem.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 2:50 UTC (Wed) by nybble41 (subscriber, #55106) [Link] (12 responses)

It's not that the terminal emulators are buggy, though I'm sure some of them are. It also has nothing to do with network transparency, which should be obvious given that very few other network-transparent protocols share these particular flaws.

The problem stems from the fact that multiple terminal clients share a single serialized connection to the terminal (real or emulated), with quite a bit of shared state and little or no inter-client coordination. To solve it you would need to ensure that separate clients have their own state and can't interfere with other clients' connections to the terminal--just as each X client has its own independent connection to the X server.
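
To make that concrete, here is a rough, purely illustrative sketch (hypothetical code, not taken from any real client) of two uncoordinated writers sharing one tty. Compare this with X, where each client gets its own connection and its own windows:

    /* Hypothetical sketch: two unrelated processes share one tty and both
     * emit cursor-positioning escape sequences with no coordination.
     * Whichever writes last wins, and partially written sequences can
     * interleave on the shared stream. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void draw_at(int row, int col, const char *text)
    {
        /* CSI row;col H moves the shared cursor; both writers assume they own it */
        printf("\033[%d;%dH%s", row, col, text);
        fflush(stdout);
    }

    int main(void)
    {
        if (fork() == 0) {
            for (int i = 0; i < 50; i++) draw_at(2, 1, "client A status line ");
            return 0;
        }
        for (int i = 0; i < 50; i++) draw_at(2, 1, "CLIENT B PROGRESS BAR");
        wait(NULL);
        return 0;
    }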

Windows-style terminals have similar issues when multiple programs try to control the same terminal window in conflicting ways (e.g. if one program attempts to scroll the buffer while another is drawing position-dependent widgets). They just don't share nearly as much state, mostly due to their extremely limited feature set. They are also much more likely to be restricted to a single program at a time, background processes being practically non-existent in the Windows command-line environment.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 5:54 UTC (Wed) by khim (subscriber, #9252) [Link] (11 responses)

It also has nothing to do with network transparency, which should be obvious given that very few other network-transparent protocols share these particular flaws.

Very few network-transparent protocols are used to mix and match unrelated programs. You can not just shove some graph-plotting program into your GNOME or KDE panel. You need specialized programs which use specialized interfaces for that.

It's trivial and efficient to do that with Wayland. Is it useful for all the programs out there? Probably not (even in the console world only a limited number of programs do that). But for some of them it's a real big plus.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 16:11 UTC (Wed) by nybble41 (subscriber, #55106) [Link] (10 responses)

Not that I disagree with anything you just said--but how does it relate to the quote you were replying to? Were you perhaps implying that the problems dlang was erroneously attributing to network transparency are instead inherent to network-transparent "protocols ... used to mix and match unrelated programs"? I don't see any basis for that assertion, either.

I have no problem with Wayland or its shared-memory architecture as a purely *local* means of compositing GUIs onto a real or virtual desktop. As I understand it, that is what Wayland was designed for, so there should be no argument there. Network transparency should be implemented at the rendering layer, and as has been stated repeatedly, Wayland does not define a rendering API. Personally, I envision something like a serialized implementation of the Gallium3D APIs taking the place of AIGLX for hardware-accelerated remote rendering, with individual applications being none the wiser. (Naturally, this would work best if they don't assume zero latency and infinite bandwidth between the CPU and GPU, but those are factors in the local case as well.)

The main concern, apparently shared by many others, is that applications may come to take Wayland's low-latency shared-memory design for granted, limiting remote access to slow, inefficient video streams--but we face that problem with X clients already.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 16:35 UTC (Wed) by nybble41 (subscriber, #55106) [Link] (9 responses)

> the problems dlang was erroneously attributing to network transparency

Sorry, but that should be Cyberax, not dlang. It seems I got the positions backward... Anyway, the point is the same. These problems aren't because the terminal protocol is serialized (network transparent), but rather because all interaction with the terminal is squashed into a single uncoordinated stream. And complaining about _input processing_ compatibility issues such as backspace appearing as ^H is just silly; the same thing happens on Windows terminals if you press unrecognized control sequences. The only difference there is that Windows only has one terminal type, which always defines backspace as ^H; Unix, with its richer history, is designed to be interoperable with many different terminal protocols.
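
As a small illustration of that last point (hypothetical code, though the termios calls are standard), a program can simply ask the line discipline which erase character the terminal is configured to send, instead of assuming ^H:

    /* Hypothetical sketch: the erase key is whatever the tty settings say it
     * is (see "stty erase"), not a fixed code.  A program that assumes ^H
     * (0x08) while the terminal sends DEL (0x7f) echoes "^H" instead of
     * erasing -- a configuration mismatch, not a flaw of serialization. */
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        struct termios t;

        if (tcgetattr(STDIN_FILENO, &t) != 0) {
            perror("tcgetattr");
            return 1;
        }
        printf("erase character is 0x%02x (%s)\n", (unsigned)t.c_cc[VERASE],
               t.c_cc[VERASE] == 0x7f ? "DEL" : "probably ^H or something else");
        return 0;
    }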

Wayland - Beyond X (The H)

Posted Feb 15, 2012 21:33 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

>Sorry, but that should be Cyberax, not dlang. It seems I got the positions backward... Anyway, the point is the same. These problems aren't because the terminal protocol is serialized (network transparent), but rather because all interaction with the terminal is squashed into a single uncoordinated stream.

Yes, it's certainly possible to design a good "remote rendering" protocol for something as simple as text. Yet it hasn't been done, and we all suffer because of it.

And the analogy with X11 is direct - Unix terminals are outgrowths of real wired terminals, so they had "network transparency" from the start, and it has caused them a lot of problems.

>And complaining about _input processing_ compatibility issues such as backspace appearing as ^H is just silly; the same thing happens on Windows terminals if you press unrecognized control sequences.

Wrong. The Windows console layer knows _nothing_ about control characters; it provides a "video surface" on which applications can do whatever they please.

Serial console emulators working on top of the console layer, of course, can (and do) misbehave.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 21:37 UTC (Wed) by khim (subscriber, #9252) [Link] (7 responses)

These problems aren't because the terminal protocol is serialized (network transparent), but rather because all interaction with the terminal is squashed into a single uncoordinated stream.

It's coordinated. You have escape sequences and about a bazillion libraries which try to generate/interpret/parse them. The whole mess includes about ten thousand times more code (no, I'm not joking!) than the simple buffer-to-buffer memcpy approach, which worked perfectly fine on all computers starting from tiny ones like the MSX (which had something like 32KB of RAM and thus quite literally had no power to even try to cope with all these libraries).

If you never used the MSX then recall MS-DOS. Things like SideKick worked just fine there and never created a mess on the screen. And the same is true on OS/2 and Windows.
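
To make the contrast concrete, a rough sketch (illustrative only; the cell array here merely stands in for real MSX/DOS video memory):

    /* Hypothetical sketch of the two models.  The "direct" variant writes a
     * character and attribute straight into a cell buffer; the "terminal"
     * variant has to encode the same intent as an escape sequence and hope
     * that every layer in between parses it the same way. */
    #include <stdio.h>

    #define ROWS 25
    #define COLS 80

    struct cell { char ch; unsigned char attr; };
    static struct cell screen[ROWS][COLS];   /* stand-in for video memory */

    static void put_direct(int row, int col, char ch, unsigned char attr)
    {
        screen[row][col] = (struct cell){ ch, attr };   /* done; nothing to parse */
    }

    static int put_escaped(char *out, size_t len, int row, int col, char ch)
    {
        /* cursor positioning, then an SGR attribute, then the character itself */
        return snprintf(out, len, "\033[%d;%dH\033[1m%c", row + 1, col + 1, ch);
    }

    int main(void)
    {
        char buf[64];

        put_direct(0, 0, 'A', 0x1f);
        printf("the same single-cell write needs %d bytes of escape codes\n",
               put_escaped(buf, sizeof buf, 0, 0, 'A'));
        return 0;
    }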

And complaining about _input processing_ compatibility issues such as backspace appearing as ^H is just silly; the same thing happens on Windows terminals if you press unrecognized control sequences.

Only if the program decides to show it explicitly. On Linux, because of the network transparency layer, you get a mess on your screen when you combine TUI programs in non-trivial ways. You can only be sure that everything will work fine if you use simple programs which don't try to format output. Things like “ls --color=auto” are not needed on a DOS/Windows/etc terminal (they are needed if you want to store output in a text file and then process it as a text file, not as a terminal buffer, but that's a different story).

The only difference there is that Windows only has one terminal type, which always defines backspace as ^H; Unix, with its richer history, is designed to be interoperable with many different terminal protocols.

Nope. That's because you don't need complex and convoluted logic to process the data. If input comes to your program then it's a Key Event or a Mouse Event. If you write text on the screen then you just write it on the screen.

Easy, simple, reliable. Not flexible, that's for sure (it's not easy to take the “screen output” of one program and interpret it as “keyboard input” for another program). Basically, in DOS/Windows/etc you have a solution which solves the most popular cases perfectly and is almost totally useless for the less common ones, while in *nix you have a solution which does everything... poorly.
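
For illustration, here is a minimal sketch of that model (the calls are the standard Win32 console API, but the snippet itself is just an example I made up, not code from any real program):

    /* Hypothetical sketch of the Windows console model described above: the
     * program writes characters at coordinates and reads discrete input
     * events; there is no escape-sequence parsing anywhere in between. */
    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
        HANDLE in  = GetStdHandle(STD_INPUT_HANDLE);
        const char *msg = "hello";
        COORD pos = { 10, 5 };
        DWORD written, nread;
        INPUT_RECORD rec;

        /* "write this string at this location" -- no terminal type involved */
        WriteConsoleOutputCharacterA(out, msg, (DWORD)strlen(msg), pos, &written);

        /* "poll for input events" -- keys and mouse arrive as structured records */
        if (ReadConsoleInput(in, &rec, 1, &nread) &&
            rec.EventType == KEY_EVENT && rec.Event.KeyEvent.bKeyDown)
            printf("got virtual-key code %u\n",
                   (unsigned)rec.Event.KeyEvent.wVirtualKeyCode);
        return 0;
    }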

Wayland vs X is the same story: you can use Wayland to do things which are very easy, simple and power-efficient (things like the complex scrolling effects demanded by Joe Average, who has seen iOS or Android), but other things (like network transparency) are hard to implement (if they are possible at all).

What will you prefer? It depends on what you want to do, really.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 22:08 UTC (Wed) by nybble41 (subscriber, #55106) [Link] (6 responses)

Seriously, we get it--though I'm not sure you do. You hate escape sequences and think it makes more sense to give TUI programs direct low-level access to the terminal--write this string at this location in this style, poll for input events, etc.

Well, perhaps that does make more sense, at least for TUI applications. (For normal command-line programs it's obviously a step backward, since there is no easy way to recover the original text once it's been formatted into a terminal buffer.) However, it could be done with a trivial network-transparent protocol just as easily as with a shared memory buffer. The important point is having low-level access to a very thin terminal, not how you communicate with it. Shared memory is faster due to being zero-copy (though not necessarily by a large margin), while message-passing has fewer locking issues and works better when latency is high. You can always convert one form into the other, though going from shared memory to message-passing tends to be much less efficient since the original structure is not preserved.

P.S. TUIs are almost entirely an anachronism by this point. The systems which make regular use of text terminals use them mainly for CLIs, not TUIs, and many of these CLI programs are designed for serialized I/O. If you want low-level access, including direct access to the screen and input events, write a GUI. It really doesn't make sense to design applications around raw VGA-style text buffers only to run them in a terminal emulator running inside X.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 22:17 UTC (Wed) by dlang (guest, #313) [Link] (1 responses)

by the way, you can run into similar formatting problems with direct screen access if you make the mistake of outputting a character to the bottom right of the screen and let the screen scroll on you.

if you just do direct I/O to the screen, character by character you don't have this problem, but you can run into the problem that a badly written program starts scribbling on whatever is in memory right after the screen.
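
for what it's worth, the fix in the direct-access model is plain bounds checking; a purely illustrative sketch (not code from anywhere real):

    /* Hypothetical sketch: with direct access the failure mode is an
     * out-of-range write, so a sane wrapper clips to the buffer instead of
     * scribbling over whatever sits in memory after the screen. */
    #include <stdbool.h>

    #define ROWS 25
    #define COLS 80

    static char screen[ROWS][COLS];

    static bool put_cell(int row, int col, char ch)
    {
        if (row < 0 || row >= ROWS || col < 0 || col >= COLS)
            return false;               /* refuse rather than write past the end */
        screen[row][col] = ch;
        return true;
    }

    int main(void)
    {
        put_cell(ROWS - 1, COLS - 1, '*');   /* bottom-right corner: fine, no scroll */
        return put_cell(ROWS, 0, '!');       /* one row past the end: rejected (0) */
    }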

Wayland - Beyond X (The H)

Posted Feb 15, 2012 22:25 UTC (Wed) by khim (subscriber, #9252) [Link]

by the way, you can run into similar formatting problems with direct screen access if you make the mistake of outputting a character to the bottom right of the screen and let the screen scroll on you.

How? Direct screen access just puts your character there. To scroll the buffer you need a completely different function.

if you just do direct I/O to the screen, character by character you don't have this problem, but you can run into the problem that a badly written program starts scribbling on whatever is in memory right after the screen.

You have this problem in any protocol with shared memory. That's what memory protection is for :-)

Wayland - Beyond X (The H)

Posted Feb 15, 2012 22:32 UTC (Wed) by khim (subscriber, #9252) [Link] (3 responses)

Seriously, we get it--though I'm not sure you do.

I'm not sure you get my point.

You hate escape sequences and think it makes more sense to give TUI programs direct low-level access to the terminal--write this string at this location in this style, poll for input events, etc.

Not just for TUI. GUI, too.

However, it could be done with a trivial network-transparent protocol just as easily as with a shared memory buffer.

This is not proven. Most network-transparent protocols lead to a mess in both the TUI and the GUI case. My point is: if we cannot even handle the simple case (TUI), then what hope is there for the more complex case (GUI)? If the TUI suffers from the introduction of an “effective network-transparency layer”, then why don't you expect a similar effect on the GUI? And if said transparency layer is actually damaging, are we even sure it's a good idea to make it a non-optional component?

Wayland - Beyond X (The H)

Posted Feb 15, 2012 23:16 UTC (Wed) by nybble41 (subscriber, #55106) [Link] (2 responses)

> Not just for TUI. GUI, too. ... Most network-transparent protocols lead to a mess in both the TUI and the GUI case.

I'm not seeing that at all. Your examples all come from the legacy serial terminal and its escape codes, and do not apply to GUIs. Elsewhere you did state that "When you see garbage in your console and when you see overlapping text in your [remotely served] Firefox it's the exact same failure", but (a) I've never seen that kind of failure, and (b) even presuming it exists, it doesn't really seem like the same thing at all.

> If the TUI suffers from the introduction of an “effective network-transparency layer” ...

But it doesn't. TUI suffers from being forced to work through a protocol which was not "designed" so much as "evolved", in the very early stages of computer science, before we learned to design proper protocols, constantly merging features from similar-but-incompatible protocols and being extended and forked for various special cases. The implementation *is* the standard, and yet there is no single reference implementation. Given all of that, it's something of a wonder that it works at all.

I doubt a direct-access protocol would work as well--consider trying to write such a TUI when there is no standard memory layout, color/style encoding, character encoding, input API, etc. That interface is simpler only because it is standardized. If you ignored the issue of legacy compatibility, and simply defined what you want the terminal to do (perhaps based on the Windows terminal API), you could trivially provide a well-defined way to serialize those commands into a stream of messages which would achieve the same effect without any of the problems you've described.

> ... if we cannot even handle the simple case (TUI), then what hope is there for the more complex case (GUI)? ... why don't you expect a similar effect on the GUI?

Basically, because the GUI case (both for X11, and even more so for whatever remoting protocol is used with Wayland) is not nearly so encumbered by the need to remain compatible with legacy clients. There are actual independent standards involved. X11 has some legacy clients to deal with, but the original protocol was still much better defined than "serial communications with ANSI/VT100/xterm escape codes". The Wayland remoting protocol is in an even better position to adopt a rational design.

Wayland - Beyond X (The H)

Posted Feb 16, 2012 1:10 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

>I'm not seeing that at all. Your examples all come from the legacy serial terminal and its escape codes, and do not apply to GUIs.

They do. The whole of X Windows uses exactly the same architecture - a series of 'escape codes' over a serial line (Unix socket or TCP pipe). And the same argument applies - X works poorly both in the remote AND in the local case.

Wayland - Beyond X (The H)

Posted Feb 16, 2012 15:29 UTC (Thu) by nybble41 (subscriber, #55106) [Link]

> The whole of X Windows uses exactly the same architecture - a series of 'escape codes' over a serial line (Unix socket or TCP pipe).

That is a gross oversimplification. Yes, they both use serial protocols. Most things do nowadays. However, X11 doesn't use "escape codes"; the protocol is designed around structured messages with clear boundaries. More importantly, each X client has its own state and an independent connection to the X server, so they can access the screen and input without conflict. The problems you described are artifacts of the particular set of protocols used for serial terminals, not of serialized protocols in general.

> And the same argument applies - X works poorly both in the remote AND in the local case.

To say it works "poorly" in either case is another exaggeration. For what it was originally designed for, it works just fine. Technology has moved on, of course, and the prevalence of client-side rendering means that we now have rather different requirements for efficient rendering than we used to. The core rendering protocol is mostly unused, clients tend to provide pre-rendered buffers, the addition of compositing support means an extra round-trip, etc. However, I don't believe I've _ever_ seen the kinds of problems in X which you've described for TUI applications.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 5:46 UTC (Wed) by khim (subscriber, #9252) [Link] (12 responses)

I've seen the exact same types of problems on local consoles, no X or network transparency in sight.

If you're talking about *nix local consoles then it's the same problem. Even the local console pays the price for network transparency, because it, too, works with a server⇋client model. The only way to see this in the DOS/Windows world is to use some kind of ancient MS-DOS program which requires ANSI.SYS (for the same reason). Modern TUIs (which use direct-to-buffer rendering) never experience this problem.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 6:46 UTC (Wed) by dlang (guest, #313) [Link] (11 responses)

no, when you use the actual console (not a terminal window on X on the local machine) there is no network or network transparency layer involved.

For that matter, I've seen the same types of corruptions on serial terminals. again no network transparency layer involved, just basic vt-100 terminal codes

the problem is that the programmer makes assumptions about what is where on the screen and doesn't check that things actually fit there. As soon as things start going out of the expected area, all sorts of problems can happen.

^H instead of backspace happens when the terminal emulation doesn't match what the system is expecting you to use. Again this has nothing to do with X or network transparency and everything to do with simple misconfigurations of the terminal settings.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 15:58 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

There is. Local consoles work through the serial ttys, just as remote consoles do.

>For that matter, I've seen the same types of corruptions on serial terminals. again no network transparency layer involved, just basic vt-100 terminal codes

Exactly. It happens on Unix terminals all the time, but it NEVER happens on Windows.

>^H instead of backspace happens when the terminal emulation doesn't match what the system is expecting you to use.

Yep, because there is no "direct rendering" mode for consoles.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 20:57 UTC (Wed) by khim (subscriber, #9252) [Link] (9 responses)

For that matter, I've seen the same types of corruptions on serial terminals. again no network transparency layer involved

Contradiction detected. The serial terminal protocol is a network transparency layer!

the problem is that the programmer makes assumptions about what is where on the screen and doesn't check that things actually fit there.

The problem is that there is no screen in this model. The screen exists "somewhere out there"; the program does not have [direct] access to it and thus is forced to work via the VT-100 terminal codes layer. This makes trivial things basically impossible, or very hard. Things which worked perfectly on tiny microcomputers (with direct memory access - think MSX) are still broken on Linux systems.

^H instead of backspace happens when the terminal emulation doesn't match what the system is expecting you to use. Again this has nothing to do with X or network transparency and everything to do with simple misconfigurations of the terminal settings.

Sorry, but no. ^H happens when the network transparency layer misbehaves. If you write characters to the buffer directly then such a mismatch can never happen. You may see some strange symbols (for example if you are using an encoding with line-drawing characters in different places), but straight lines can never become a tangled mess (as often happens in Linux) because in a programming model with direct access to the buffer there is no easy way to create such a mess.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 21:27 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

>You may see some strange symbols (for example if you are using an encoding with line-drawing characters in different places)

Ah. GOST vs. alternative GOST encoding on old printers and video cards. I think I'm going to spend the next 2 hours in nostalgia.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 21:38 UTC (Wed) by nybble41 (subscriber, #55106) [Link] (6 responses)

> The screen exists "somewhere out there"; the program does not have [direct] access to it and thus is forced to work via the VT-100 terminal codes layer.

This should be a clue. You're complaining about artifacts of the VT-100 terminal protocol, not anything inherent in network transparency. I don't think anyone would argue that VT-100 is an ideal arrangement, but its flaws are not applicable to network-transparent protocols in general.

Imagine for a moment, if you will, a serialized terminal protocol consisting of a series of atomic (row, column, style, character) quads. This protocol would be perfectly network-transparent, yet would also be indistinguishable from the direct access you're advocating--the quads and direct writes to the buffer being perfectly interchangeable.
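
A back-of-the-envelope sketch of that idea (purely illustrative, not a real protocol): the same quad can either be applied to a local cell buffer or serialized onto a byte stream, and nothing about its meaning changes:

    /* Hypothetical sketch of the (row, column, style, character) quads: the
     * same update can be applied to a local cell buffer or serialized into a
     * fixed-size message, which is all the "network transparency" amounts to. */
    #include <stdint.h>
    #include <stdio.h>

    struct quad {
        uint16_t row, col;
        uint8_t  style;
        uint32_t ch;                     /* a Unicode code point */
    };

    /* local case: indistinguishable from a direct write into the buffer */
    static void apply_local(uint32_t screen[25][80], const struct quad *q)
    {
        if (q->row < 25 && q->col < 80)
            screen[q->row][q->col] = q->ch;
    }

    /* remote case: the same quad, serialized into a fixed 9-byte message */
    static size_t serialize(const struct quad *q, uint8_t out[9])
    {
        out[0] = q->row >> 8;  out[1] = q->row & 0xff;
        out[2] = q->col >> 8;  out[3] = q->col & 0xff;
        out[4] = q->style;
        out[5] = q->ch >> 24;  out[6] = (q->ch >> 16) & 0xff;
        out[7] = (q->ch >> 8) & 0xff;  out[8] = q->ch & 0xff;
        return 9;
    }

    int main(void)
    {
        static uint32_t screen[25][80];
        struct quad q = { 3, 7, 1, 'X' };
        uint8_t msg[9];

        apply_local(screen, &q);              /* shared-memory style */
        printf("serialized %zu bytes for the same update\n", serialize(&q, msg));
        return 0;
    }

Whether those nine bytes then travel over a pipe, a Unix socket, or TCP is irrelevant to the terminal semantics, which is the point.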

You can even do this with the existing VT-100 protocol provided you have exclusive access to the terminal and don't mind wasting some bandwidth constantly resupplying position and style information the terminal already has.

More generally, *any* interprocess communication protocol can be transformed into an equivalent "network-transparent" version based on the exchange of serialized messages; the point of designing for transparency up front is to make the serialization more efficient by preserving the original, higher-level representation, e.g. rendering commands rather than raw video, and by providing for high latency and being conservative of bandwidth, both of which tend to be useful properties in the local case anyway.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 22:19 UTC (Wed) by khim (subscriber, #9252) [Link] (5 responses)

You're complaining about artifacts of the VT-100 terminal protocol, not anything inherent in network transparency.

Actually that's both. These are artifacts of the VT-100 protocol which are typical for systems with “efficient network transparency”.

I don't think anyone would argue that VT-100 is an ideal arrangement, but its flaws are not applicable to network-transparent protocols in general.

Sadly they are. When you see garbage in your console and when you see overlapping text in your [remotely served] Firefox it's the exact same failure: bugs and inconsistencies in an unnecessarily complex (from the local-use POV) protocol which is inconsistently/buggily implemented.

Imagine for a moment, if you will, a serialized terminal protocol consisting of a series of atomic (row, column, style, character) quads. This protocol would be perfectly network-transparent, yet would also be indistinguishable from the direct access you're advocating--the quads and direct writes to the buffer being perfectly interchangeable.

Well, sure. This is the VNC-like remote display protocol advocated by the Wayland developers.

You can even do this with the existing VT-100 protocol provided you have exclusive access to the terminal and don't mind wasting some bandwidth constantly resupplying position and style information the terminal already has.

You can not. You can write one such program, but it'll be an exercise in futility because all other programs expect to use the "network-transparent" VT100-based protocol.

More generally, *any* interprocess communication protocol can be transformed into an equivalent "network-transparent" version based on the exchange of serialized messages; the point of designing for transparency up front is to make the serialization more efficient by preserving the original, higher-level representation, e.g. rendering commands rather than raw video,

Bingo! And this is exactly where you introduce a multitude of bugs and inconsistencies. It's trivial to implement the Windows-console or Wayland protocol: it's a very simple, very robust API. When you add high-level primitives you add complexity, which leads to problems. For example, if you offload text drawing to the server then you lose the ability to match your pre-rendered picture and your drawn-on-screen text (because you must use the exact same type of antialiasing in both cases). And the more complexity you add to your protocol, the buggier the outcome.

and by providing for high latency and being conservative of bandwidth, both of which tend to be useful properties in the local case anyway.

Nope. GPUs can blend buffers very efficiently. There is no need to bother with all these optimizations unless your program requires them. The developer can decide whether to create a perfect rendering of the font (remember the article which showed an example in which each line has a 1/10th-pixel shift, so that over the run of 30 lines it gradually (gradually!) accumulates 3 extra pixels?), or to go with a fast-and-loose implementation.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 23:41 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

>You can not. You can write one such program, but it'll be an exercise in futility because all other programs expect to use the "network-transparent" VT100-based protocol.

Well, you can. By writing a terminal emulator working within another terminal emulator. Sort of like 'screen' does.

But it kinda proves our point :)

Wayland - Beyond X (The H)

Posted Feb 16, 2012 0:41 UTC (Thu) by cmccabe (guest, #60281) [Link] (3 responses)

What is your point exactly? That you like direct-mapped text consoles better than network-transparent ones?

That would take us back to the days of the Commodore 64, when remote administration was impossible. Of course, networking was primitive and clunky, so maybe that was a blessing in disguise. And of course the C64 NEVER experienced screen corruption. Oh wait, no, it did-- due to two things: flaky hardware and buggy software. The same things that cause screen corruption and other bugs today.

It's possible that HTML5 and similar technologies will make X11's network transparency obsolete. That might be a valid argument to make. Arguing that we should not implement network transparency because it hurts your poor little brain is not. It's a feature that people want--deal with it. And yes, people would rather have something with a bug or two that does what they want than something simple and featureless which is only useful as a doorstop (like the Windows console).

Wayland - Beyond X (The H)

Posted Feb 16, 2012 1:07 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

Our point is that local direct-rendering protocols work better than remote ones, given the same amount of work.

It's possible to design a good remote graphics protocol, but it's VERY non-trivial to make it perfect. It's so non-trivial that even simple text-based rendering is still imperfect after 35 years of development.

>And yes, people would rather have something with a bug or two that does what they want than something simple and featureless which is only useful as a doorstop

Would you use a car that spontaneously ejects a driver (even on a highway) sometimes but also has a nice set of skis for downhill car skiing?

I don't think so. And neither would most people. Besides, I'm seeing TeamViewer and Gotomeeting being used several orders of magnitude more than X remoting.

Wayland - Beyond X (The H)

Posted Feb 20, 2012 0:23 UTC (Mon) by cmccabe (guest, #60281) [Link] (1 responses)

Text-based rendering is imperfect on Linux for the same reason that HTML5 is a mess. Whenever you start talking about network protocols, you're talking about communication between different systems. It's a good guess that the vendors of many of those systems are competing with each other and have no incentive to make things work smoothly. In fact, they often have the opposite incentive.

> Would you use a car that spontaneously ejects a driver
> (even on a highway) sometimes but also has a nice set of
> skis for downhill car skiing?

There were 33,000 road fatalities in the United States in 2009. Cars are dangerous. But apparently people are willing to take the risk of driving them anyway. It's about risk versus reward.

> Besides, I'm seeing TeamViewer and Gotomeeting being used several
> orders of magnitude more than X remoting.

Gotomeeting is Windows software, so I don't see how it's relevant here. You might have stayed on topic and mentioned VNC or HTML5 as X11 replacements.

I wish the Wayland advocates would be more specific about what they are offering in exchange for network transparency. So far, all I've seen is "it's like X, but more efficient," which isn't exactly an inspiring battle cry. But I haven't had time to read all of the documents, just the short summaries that I come across.

Wayland - Beyond X (The H)

Posted Feb 20, 2012 12:58 UTC (Mon) by dgm (subscriber, #49227) [Link]

> I wish the Wayland advocates would be more specific about what they are offering in exchange for network transparency.

What Wayland brings to the table is simplicity. It's much simpler than X11 _because_ it doesn't have to support network transparency. And that's not in exchange for network transparency; you can run X11 on top of Wayland. Put another way: by removing the network from the equation, and unifying the server, window manager and compositor, you get a design that is workable and can serve as the base for a remote protocol, if that's what you need, but without the penalty imposed on local clients.

Wayland - Beyond X (The H)

Posted Feb 17, 2012 3:14 UTC (Fri) by jschrod (subscriber, #1646) [Link]

^H happens when the network transparency layer misbehaves.

I really thought you had something to tell.

*PLONK*

