

Wayland - Beyond X (The H)

Posted Feb 14, 2012 10:32 UTC (Tue) by renox (guest, #23785)
In reply to: Wayland - Beyond X (The H) by nix
Parent article: Wayland - Beyond X (The H)

Sorry nix, but you seem to forget that the compositor sees only "buffers" (images) of your application, so the compositor won't ever be able to do many of the optimisations that a toolkit or an application can do for remote access.

For example, one could imagine the display server having a cache for the application's fonts, managed by the client: the server gives the size of the cache at startup, the client sends a texture of the most-used glyphs, and then most of the time the client wouldn't have to send the image of a character again.
How would you do this kind of optimisation in the compositor? You can't; all you can do is compress the images.
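
A minimal sketch in C of what such a glyph-cache protocol could look like; these message types are invented purely for illustration and exist in neither X nor Wayland:

    #include <stdint.h>

    /* Server -> client at startup: how much glyph storage the server offers. */
    struct cache_size_msg {
        uint32_t bytes_available;
    };

    /* Client -> server, once per glyph: upload the rasterised image,
       keyed by (font, glyph) ids chosen by the client. */
    struct glyph_upload_msg {
        uint32_t font_id;
        uint32_t glyph_id;
        uint16_t width, height;
        /* followed by width * height bytes of alpha coverage */
    };

    /* Client -> server on every later draw: reference the cached glyph
       by id only, a few bytes instead of a re-sent image. */
    struct glyph_draw_msg {
        uint32_t font_id;
        uint32_t glyph_id;
        int16_t  x, y;
    };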



Wayland - Beyond X (The H)

Posted Feb 14, 2012 11:12 UTC (Tue) by etienne (guest, #25256) [Link] (6 responses)

> one could imagine the display server having a cache for the application's fonts, managed by the client

I sometimes wonder how much data is exchanged between client and server where both the server and the client have identical data stored locally.
For instance, why send fonts over the network when both client and server have a local copy of the same font file (perhaps because both are standard installations of the same Linux distribution)?

Wayland - Beyond X (The H)

Posted Feb 14, 2012 11:30 UTC (Tue) by nix (subscriber, #2304) [Link] (4 responses)

> For instance, why send fonts over the network when both client and server have a local copy of the same font file (perhaps because both are standard installations of the same Linux distribution)?
You have reinvented the X font server and core fonts. It turns out to be a pointless optimization: you gain the ability to avoid one transfer, across the network, of those glyphs of the font which are actually used (and only those glyphs). In exchange, you lose accurate client-side metrics and client-side compositing when needed (GlyphSets don't have to be composited on the server side, they just normally are); in current implementations the server has to pull the whole font in glyph by glyph, so it's horribly slow with large Unicode fonts; and you gain failure modes from fonts claiming to be identical but not actually being pixel-identical, validating which requires horrible hash-transfer handshakes...

Optimizing to speed up a transfer of a few hundred or a few thousand characters that occurs once in the life of a client is a waste of time. You're optimizing initialization! More important is that we not lose the ability to distinguish repeated units (like glyphs) and composite them on the server side... and judging from other comments here, Wayland throws that baby out with the bathwater, leaving us with a protocol that must necessarily be far less bandwidth-efficient than X when displaying "difficult" images like a black-background xterm containing lots of text all in the same font. Because that trivial and common case is too hard to optimize! (boggle.)
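
For concreteness, the server-side glyph compositing being defended here is XRender's GlyphSet mechanism. A rough C sketch using the real XRender calls, with the display, pictures and the actual glyph upload (XRenderAddGlyphs) assumed to be handled elsewhere:

    #include <X11/Xlib.h>
    #include <X11/extensions/Xrender.h>

    /* One-time setup: an A8 GlyphSet lives in the server and acts as the
       glyph cache, filled by the client with XRenderAddGlyphs(). */
    GlyphSet make_glyph_cache(Display *dpy)
    {
        return XRenderCreateGlyphSet(dpy,
                   XRenderFindStandardFormat(dpy, PictStandardA8));
    }

    /* Every draw afterwards ships one byte per character: indices into the
       cached set. The ids here happen to equal ASCII codes only because
       the client chose to upload the glyphs under those ids. */
    void draw_cached_text(Display *dpy, Picture src, Picture dst,
                          GlyphSet cache, int x, int y)
    {
        XGlyphElt8 elt = {
            .glyphset = cache,
            .chars    = "hello",
            .nchars   = 5,
            .xOff     = 0,
            .yOff     = 0,
        };
        XRenderCompositeText8(dpy, PictOpOver, src, dst,
                              NULL, 0, 0, x, y, &elt, 1);
    }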

Wayland - Beyond X (The H)

Posted Feb 14, 2012 16:48 UTC (Tue) by jonabbey (guest, #2736) [Link] (3 responses)

On the other hand, text on a solid background should be massively compressible, no? I suppose it would depend on the intelligence of the batching and delta protocol?

What bit-blitting for text rendering loses on a character-by-character basis, it should presumably be able to win back when scrolling?
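
A toy run-length encoder in C makes the point: a scanline of text on a solid background is mostly one long run. This is only an illustrative sketch, not any real protocol's compressor:

    #include <stddef.h>
    #include <stdint.h>

    /* Encode n pixels as (run length, value) pairs; out must have room
       for 2 * n bytes in the worst case. */
    size_t rle_encode(const uint8_t *in, size_t n, uint8_t *out)
    {
        size_t o = 0;

        for (size_t i = 0; i < n; ) {
            size_t run = 1;
            while (i + run < n && run < 255 && in[i + run] == in[i])
                run++;
            out[o++] = (uint8_t)run; /* run length */
            out[o++] = in[i];        /* pixel value */
            i += run;
        }
        return o; /* a mostly-black row shrinks by roughly 100x */
    }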

Wayland - Beyond X (The H)

Posted Feb 14, 2012 17:05 UTC (Tue) by renox (guest, #23785) [Link] (2 responses)

The thing is, you want very fast compression.

From a system point of view, it's very stupid to lose all this information provided by the application: converting text to an image and then trying to compress the generated images.

Wayland - Beyond X (The H)

Posted Feb 14, 2012 17:35 UTC (Tue) by gioele (subscriber, #61675) [Link] (1 responses)

> From a system point of view, it's very stupid to lose all this information provided by the application: converting text to an image and then trying to compress the generated images.

On the other hand, if you do not send rasterized text as an image, you need to send text + fonts + other bits of information. In the past this approach was found to be limiting and was replaced by the current client-side font handling.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 12:51 UTC (Wed) by renox (guest, #23785) [Link]

You misunderstood me: I wasn't advocating the old X way of doing this. The XRender way of doing fonts is quite good (client-side glyph generation and a cache in the server): flexible and efficient for network transparency.

What I was saying is that with Wayland the compositor will just see an image of the window, so from a networking point of view this is not very efficient.

"Compressing" a window would be much more efficient if the background and the text were handled separatedly.

Wayland - Beyond X (The H)

Posted Feb 14, 2012 11:37 UTC (Tue) by renox (guest, #23785) [Link]

Good remark; it's an interesting "corner case" optimisation. But note that to be 100% sure that what the client and the server have is the same, you cannot trust filenames or version numbers: you'd need to checksum the content of the "supposedly shared" data.
Note that even when the client and the server have the same font files, there is a latency issue: the server doesn't know which font will be used, which means that the first time a client refers to a font, the server would need to access the disk, and disks are slow!
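
A sketch in C of that checksum step; FNV-1a is used purely for brevity, a real protocol would want a cryptographic hash:

    #include <stdint.h>
    #include <stdio.h>

    /* Hash the font file's actual bytes; client and server compare the
       results before treating their copies as the same font. */
    uint64_t hash_file(const char *path)
    {
        uint64_t h = 1469598103934665603ULL; /* FNV-1a offset basis */
        FILE *f = fopen(path, "rb");
        int c;

        if (!f)
            return 0;
        while ((c = fgetc(f)) != EOF)
            h = (h ^ (uint8_t)c) * 1099511628211ULL; /* FNV prime */
        fclose(f);
        return h;
    }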

Wayland - Beyond X (The H)

Posted Feb 14, 2012 11:25 UTC (Tue) by nix (subscriber, #2304) [Link] (4 responses)

In that case the local protocol needs to grow. This optimization is crucial: it reduced the network hit of client-side fonts by many orders of magnitude at nearly zero cost. (Shoving huge bitmaps around on the local machine unnecessarily is going to hurt, too, with the memory hierarchy being what it is.)

Wayland - Beyond X (The H)

Posted Feb 14, 2012 13:57 UTC (Tue) by renox (guest, #23785) [Link]

>Shoving huge bitmaps around on the local machine unnecessarily is going to hurt, too, with the memory hierarchy being what it is

No: they won't "shove" them around in the "normal" configuration. The client application renders itself into a buffer in the GPU's memory, then a reference to this buffer is sent to the display server, which composes the buffers: everything stays in GPU memory, which is very efficient for the local case.

Of course this only works well if you have good hardware and a good driver; otherwise, indeed, this won't be very efficient.
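
For the local case the client-side code really is tiny. A sketch using the actual libwayland-client calls, with the surface and buffer assumed to have been created earlier (via wl_shm or EGL):

    #include <wayland-client.h>

    /* "Sending" a frame locally: attach a buffer *handle* to the surface
       and commit. No pixels cross the socket; the compositor composes the
       buffer straight from GPU (or shared) memory. */
    void present(struct wl_surface *surface, struct wl_buffer *buffer,
                 int32_t width, int32_t height)
    {
        wl_surface_attach(surface, buffer, 0, 0);
        wl_surface_damage(surface, 0, 0, width, height);
        wl_surface_commit(surface);
    }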

Wayland - Beyond X (The H)

Posted Feb 14, 2012 14:40 UTC (Tue) by renox (guest, #23785) [Link]

> In that case the local protocol needs to grow. This optimization is crucial: it reduced the network hit of client-side fonts by many orders of magnitude at nearly zero cost.

This is very unlikely to happen, as one of the design goals of Wayland is to allow the application to render itself any way it chooses and then to work with buffers.

Wayland - Beyond X (The H)

Posted Feb 14, 2012 16:45 UTC (Tue) by farnz (subscriber, #17727) [Link] (1 responses)

You might want to look at the Wayland protocol description - it's ~700 lines of XML, and reasonably comprehensible as-is.

A useful lie-to-children description of Wayland's protocol is that it's the bits of the X11 protocol that an application using XI2 for input and DRI2 for rendering will use.

Wayland - Beyond X (The H)

Posted Feb 15, 2012 15:07 UTC (Wed) by renox (guest, #23785) [Link]

Also interesting is the PDF "The Wayland Display Server" (*); it's much more readable.
It doesn't list exactly the same objects as the XML files, though.

*: http://people.freedesktop.org/~krh/wayland.pdf

Wayland - Beyond X (The H)

Posted Feb 14, 2012 14:14 UTC (Tue) by mjthayer (guest, #39183) [Link] (14 responses)

> Sorry nix, but you seem to forget that the compositor sees only "buffers" (images) of your application, so the compositor won't ever be able to do many of the optimisations that a toolkit or an application can do for remote access.

Is a buffer really just an image? I thought it was a handle, and farnz mentioned proxying EGL and OpenGL command streams above [1], which should take care of the font case quite nicely.

[1] http://lwn.net/Articles/481313/

Wayland - Beyond X (The H)

Posted Feb 14, 2012 16:01 UTC (Tue) by farnz (subscriber, #17727) [Link] (13 responses)

The buffer a Wayland client hands to the Wayland compositor is just a handle representing an image that the hardware already knows about. Wayland clients are where any network remoting will take place; for example, you could have a Wayland client that handles translating between Wayland and a serialised representation of GDK 3's events and commands. That serialised representation can then be carried over (for example) an SSH session to the Wayland proxy client, or to a Win32 proxy client that then uses native rendering to display the application.

This pushes the rendering to the Wayland end of the link - it deserialises the protocol the toolkit has chosen to do its rendering, and converts that to suitable drawing commands. Wayland, after all, has no built-in rendering of its own.
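
A guess at what one record of such a serialised toolkit stream might carry; these types are invented for illustration and are not part of GDK or Wayland:

    #include <stdint.h>

    enum remote_op {
        OP_DRAW_RECT,   /* application -> proxy: fill a rectangle */
        OP_DRAW_TEXT,   /* application -> proxy: draw a UTF-8 string */
        OP_INPUT_EVENT, /* proxy -> application: key/pointer event */
    };

    /* Framing for one record on the SSH/TCP stream. */
    struct remote_record {
        uint16_t op;     /* one of enum remote_op */
        uint16_t length; /* payload bytes that follow */
        /* payload follows: coordinates, colour, or the string itself */
    };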

Wayland - Beyond X (The H)

Posted Feb 16, 2012 10:28 UTC (Thu) by mjthayer (guest, #39183) [Link] (12 responses)

> The buffer a Wayland client hands to the Wayland compositor is just a handle representing an image that the hardware already knows about. Wayland clients are where any network remoting will take place [...]

Let's see if I have got a better idea now (after having looked at the Wayland architecture document[1] as well). It seems to me that to have an application render remotely one would need two different parallel connections: a remote OpenGL ES/EGL connection for the client to render over (is there such a thing currently?) and a connection from the client to the remote compositor to tell it about buffer updates. (And obviously the logical way to render fonts in such a set-up is to upload the glyphs to the remote display once and render them into your buffers using OpenGL ES commands.) Was that a bit closer?
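
A sketch of that "upload the glyphs once" step using real GL ES 2 calls; the packed A8 atlas data is assumed to come from client-side font rasterisation:

    #include <GLES2/gl2.h>

    /* One-time upload of a single-channel glyph atlas to the remote GPU. */
    GLuint upload_glyph_atlas(const void *atlas_pixels, int width, int height)
    {
        GLuint tex;

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1); /* rows are tightly packed */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0,
                     GL_ALPHA, GL_UNSIGNED_BYTE, atlas_pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        /* Every string drawn afterwards is just a few textured quads:
           positions plus texture coordinates into this atlas. */
        return tex;
    }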

Out of interest, my current understanding also suggests that the compositor and the server should be a single application. Is that right?

[1] http://wayland.freedesktop.org/architecture.html

Wayland - Beyond X (The H)

Posted Feb 16, 2012 10:47 UTC (Thu) by mjthayer (guest, #39183) [Link]

> Out of interest, my current understanding also suggests that the compositor and the server should be a single application. Is that right?

Taking my answer from "FOSDEM: The Wayland display server" in this week's LWN[1]:

"The code is divided in two parts: Wayland is the protocol and IPC mechanism, while Weston is the reference implementation of the compositor (and thus also the display server)."

So yes.

[1] http://lwn.net/Articles/481490/

Wayland - Beyond X (The H)

Posted Feb 16, 2012 11:06 UTC (Thu) by farnz (subscriber, #17727) [Link] (10 responses)

I'm going to try and answer you in reverse order - please bear with me.

Wayland is currently two things:

  1. A protocol for communication between a compositing display server and local applications.
  2. A library that implements that protocol, so that compositor implementers (e.g. Weston, Compiz, KWin) can concentrate on the hard stuff.

The compositor is the display server in Wayland's world - it passes input events to applications and accepts images from them for display.

My personal expectation is that an application that's rendering remotely will be split into two parts; one part (which I've been calling the rendering proxy) will run on the same machine as the Wayland compositor, and maintain two connections:

  1. Wayland protocol to the local compositor.
  2. TCP, an SSH stream (to let SSH handle the security stuff for you), or another suitable socket type to the remote application, talking a private protocol.

The remote application will have one open socket, to the rendering proxy, over which it will serialise its rendering onto the network and get events back from the proxy.

OpenGL ES has no serialisation form at all at the moment - it's an API that the client application calls. It shouldn't, however, be hard to write a rendering proxy that talks to a fake Wayland "compositor" and EGL library on the application machine, to forward all rendering across the network; there's just a lot of gruntwork there.
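
The skeleton of such a proxy, assuming plain sockets and eliding all actual protocol handling; this only shows the two-connection shape:

    #include <poll.h>
    #include <unistd.h>

    /* One fd speaks Wayland to the local compositor, the other is the
       SSH/TCP stream to the remote application. */
    void proxy_loop(int wayland_fd, int remote_fd)
    {
        struct pollfd fds[2] = {
            { .fd = wayland_fd, .events = POLLIN },
            { .fd = remote_fd,  .events = POLLIN },
        };
        char buf[4096];

        for (;;) {
            if (poll(fds, 2, -1) < 0)
                break;
            if (fds[0].revents & POLLIN) {
                /* input events, frame callbacks -> serialise to the app */
                ssize_t n = read(wayland_fd, buf, sizeof buf);
                if (n <= 0)
                    break;
                /* ... translate, then write(remote_fd, ...) ... */
            }
            if (fds[1].revents & POLLIN) {
                /* remote drawing commands -> render into a local buffer,
                   then attach/commit it to the compositor */
                ssize_t n = read(remote_fd, buf, sizeof buf);
                if (n <= 0)
                    break;
                /* ... deserialise and draw ... */
            }
        }
    }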

Wayland - Beyond X (The H)

Posted Feb 16, 2012 18:11 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (6 responses)

>OpenGL ES has no serialisation form at all at the moment - it's an API that the client application calls. It shouldn't, however, be hard to write a rendering proxy that talks to a fake Wayland "compositor" and EGL library on the application machine, to forward all rendering across the network; there's just a lot of gruntwork there.

You can use Gallium3D for this. That's how SVGA (VMWare's 3D acceleration driver for guests) works.

But the amount of traffic is going to make it impractical.

Wayland - Beyond X (The H)

Posted Feb 16, 2012 20:02 UTC (Thu) by mjthayer (guest, #39183) [Link] (5 responses)

> You can use Gallium3D for this. That's how SVGA (VMWare's 3D acceleration driver for guests) works.

To my knowledge (I have studied parts of its programming documentation), SVGA emulates a graphics card with 3D capabilities, though with a much simpler register interface than an equivalent physical card. Gallium3D is a framework for writing (mainly DRI2) graphics drivers, which VMWare uses because the people who developed it are now their 3D engineering team. Gallium3D has been used for what you describe (see the article on Phoronix [1]), though that only helps if you have a Gallium3D driver on the other side, and I don't think that Gallium3D presents any sort of stable API, which might cause problems. I would have thought that directly proxying OpenGL ES might be a better idea (it has been specially developed for a small footprint as far as I can see), though Gallium3D might be useful for translating other APIs (like OpenGL + GLX) to OpenGL ES.

[1] http://www.phoronix.com/scan.php?page=news_item&px=OD...

Wayland - Beyond X (The H)

Posted Feb 16, 2012 23:43 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

>Gallium3D has been used for what you describe (see the article on Phoronix [1]), though that only helps if you have a Gallium3D driver on the other side

There's no need for a Gallium3D driver on the host side. Indeed, 3D guest acceleration in VMWare works fine with the NVidia blob or even DirectX on Windows.

>I would have thought that directly proxying OpenGL ES might be a better idea (it has been specially developed for a small footprint as far as I can see), though Gallium3D might be useful for translating other APIs (like OpenGL + GLX) to OpenGL ES.

Direct proxying has some problems. It's way too expensive, for one thing; try running APITrace ( http://zrusin.blogspot.com/2011/04/apitrace.html ) and see for yourself.

Wayland - Beyond X (The H)

Posted Feb 17, 2012 12:52 UTC (Fri) by mjthayer (guest, #39183) [Link] (3 responses)

> There's no need for a Gallium3D driver on the host side. Indeed, 3D guest acceleration in VMWare works fine with the NVidia blob or even DirectX on Windows.

Right, but as I said VMWare's 3D pipeline isn't intimately tied into Gallium3D. They presumably used Gallium3D to develop the drivers because that was what they knew well, given that they (the ex-Tungsten Graphics team) had spent the last several years developing it. In theory it would work just as well without it, though it may be (I haven't looked at the register interface that closely) that their virtual hardware is designed to be a good match for Gallium3D. That wouldn't be very surprising, as Gallium3D was designed to be a good match for current physical hardware.

> Direct proxying has some problems. It's way too expensive, for one thing; try running APITrace ( http://zrusin.blogspot.com/2011/04/apitrace.html ) and see for yourself.

Does that go for OpenGL ES 2 as well? I thought it was OpenGL stripped to the minimum that makes sense for modern graphics cards (mainly shader stuff). I haven't looked at Gallium3D that closely yet, but my feeling was that its "API" isn't all that different to OpenGL ES 2. Except that OpenGL ES 2 is a fixed interface whereas Gallium3D is still rather blurry and changing. If you know more than I do, I'd be very interested to know.

Wayland - Beyond X (The H)

Posted Feb 17, 2012 16:40 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

>Right, but as I said VMWare's 3D pipeline isn't intimately tied into Gallium3D.
But it is. Even their Windows guest 3D driver is based on Gallium3D.

>They presumably used Gallium3D to develop the drivers because that was what they knew well, given that they (the ex-Tungsten Graphics team) had spent the last several years developing it.
Well, there's a reason they bought Tungsten Graphics in the first place.

>Does that go for OpenGL ES 2 as well? I thought it was OpenGL stripped to the minimum that makes sense for modern graphics cards (mainly shader stuff).
Yes, unfortunately. OpenGL ES 2 is just as big in real cases.

>I haven't looked at Gallium3D that closely yet, but my feeling was that its "API" isn't all that different to OpenGL ES 2.

Gallium3D is not really an API; it's more of a good intermediate model for other things, sort of like LLVM is a nice layer for other stuff.

Wayland - Beyond X (The H)

Posted Feb 17, 2012 17:02 UTC (Fri) by mjthayer (guest, #39183) [Link] (1 responses)

>>Right, but as I said VMWare's 3D pipeline isn't intimately tied into Gallium3D.
>But it is. Even their Windows guest 3D driver is based on Gallium3D.

But surely this is because the purpose of Gallium3D is modularising graphics driver code and letting you reuse as much code as possible in different drivers? So that the approach would have been just as valuable if they were targeting a traditional graphics card rather than a virtual pass-through device?

>>I haven't looked at Gallium3D that closely yet, but my feeling was that its "API" isn't all that different to OpenGL ES 2.
>Gallium3D is not really an API; it's more of a good intermediate model for other things.

Well, Gallium3D's interface on the hardware side, if you like. That is one reason I wrote "API" in quotes.

Wayland - Beyond X (The H)

Posted Feb 18, 2012 4:06 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

>But surely this is because the purpose of Gallium3D is modularising graphics driver code and letting you reuse as much code as possible in different drivers? So that the approach would have been just as valuable if they were targeting a traditional graphics card rather than a virtual pass-through device?

There are no "traditional graphics cards" anymore...

Besides, what are you going to target? What if you target NVidia cards and your host has an ATI card?

So you have to build a translation layer somehow. You can do this at the API level (as the VirtualBox guest additions do), but with shaders and OpenCL it's a doomed proposition. Gallium3D lets you build a virtual abstract graphics card and use the usual mechanisms, like OpenGL or DirectX, to work with it.

>Well, Gallium3D's interface on the hardware side, if you like. That is one reason I wrote "API" in quotes.
You can talk to hardware without Gallium3D (indeed, Intel's 3D drivers do not use it). Gallium3D is more of a user-space abstraction that translates OpenGL/OpenCL/VDPAU into a stream of abstract commands (the TGSI command stream), which can then be translated into the graphics card's native command format.

Wayland - Beyond X (The H)

Posted Feb 17, 2012 2:09 UTC (Fri) by obi (guest, #5784) [Link] (2 responses)

Does that mean you'd have to write a "rendering proxy" for every rendering API that could possibly be used, i.e. Cairo, OpenGL, or whatever obscure graphics library a future Wayland client might decide to use?

Wayland - Beyond X (The H)

Posted Feb 17, 2012 5:02 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

No; obscure old libraries are unlikely to be ported to Wayland and would just run inside a hosted X server.

Alternatively, they can fall back to a generic VNC-like protocol.

Wayland - Beyond X (The H)

Posted Feb 17, 2012 12:56 UTC (Fri) by mjthayer (guest, #39183) [Link]

I thought that these days most graphics/GPU/GPGPU stuff could be expressed using buffers and shaders anyway, so that you could encapsulate it in OpenGL ES 2 if you wanted. That's similar to what Gallium3D is doing, to bring it back into this thread too.
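
As a sketch of how small that "buffers and shaders" core is, a complete GL ES 2 pass-through program is little more than this (error checking omitted):

    #include <GLES2/gl2.h>

    static const char *vs_src =
        "attribute vec2 pos;\n"
        "void main() { gl_Position = vec4(pos, 0.0, 1.0); }\n";
    static const char *fs_src =
        "precision mediump float;\n"
        "void main() { gl_FragColor = vec4(1.0); }\n";

    /* Compile and link a trivial pipeline; everything else an application
       draws is, in the end, vertex buffers fed through programs like this. */
    GLuint build_program(void)
    {
        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        GLuint prog = glCreateProgram();

        glShaderSource(vs, 1, &vs_src, NULL);
        glCompileShader(vs);
        glShaderSource(fs, 1, &fs_src, NULL);
        glCompileShader(fs);
        glAttachShader(prog, vs);
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        return prog;
    }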

