Wayland - Beyond X (The H)

Posted Feb 14, 2012 14:14 UTC (Tue) by mjthayer (guest, #39183)
In reply to: Wayland - Beyond X (The H) by renox
Parent article: Wayland - Beyond X (The H)

> Sorry nix, but you seem to forget that the compositor sees only "buffers" (images) of your application, so the compositor won't ever be able to do many optimisations that a toolkit or an application can do for remote access.

Is a buffer really just an image? I thought it was a handle, and farnz mentioned proxying EGL and OpenGL command streams above [1], which should take care of the font case quite nicely.

[1] http://lwn.net/Articles/481313/


Wayland - Beyond X (The H)

Posted Feb 14, 2012 16:01 UTC (Tue) by farnz (subscriber, #17727) [Link] (13 responses)

The buffer passed between a Wayland client and the compositor is just a handle representing an image that the hardware already knows about. Wayland clients are where any network remoting will take place; for example, you could have a Wayland client that handles translating between Wayland and a serialised representation of GDK 3's events and commands. That serialised representation can then be carried over (for example) an SSH session to the Wayland proxy client, or to a Win32 proxy client that then uses native rendering to display the application.

This pushes the rendering to the Wayland end of the link - the proxy deserialises whatever protocol the toolkit has chosen for its rendering and converts it into suitable drawing commands. Wayland, after all, has no built-in rendering of its own.
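
To make that a little more concrete, here is a very rough sketch of what the display-side proxy could look like. Every name in it is invented for illustration, and the interesting parts (speaking libwayland-client, deserialising the toolkit's drawing protocol) are stubbed out; the point is only the shape - one connection to the local compositor, one stream to the remote application, and a loop shuttling input events one way and drawing the other:

    #include <poll.h>
    #include <unistd.h>

    static int connect_to_local_compositor(void)
    {
        return -1;                          /* stub: would use libwayland-client */
    }

    static void forward_input_events(int wayland_fd, int stream_fd)
    {
        (void)wayland_fd; (void)stream_fd;  /* stub: send events to the app      */
    }

    static void replay_toolkit_drawing(int stream_fd, int wayland_fd)
    {
        (void)stream_fd; (void)wayland_fd;  /* stub: draw into a local buffer    */
    }

    int main(void)
    {
        int wayland_fd = connect_to_local_compositor();
        int stream_fd  = STDIN_FILENO;      /* e.g. the SSH-carried app stream   */

        struct pollfd fds[2] = {
            { .fd = wayland_fd, .events = POLLIN },
            { .fd = stream_fd,  .events = POLLIN },
        };

        for (;;) {
            if (poll(fds, 2, -1) < 0)
                break;
            if (fds[0].revents & POLLIN)    /* input, frame callbacks, ...       */
                forward_input_events(wayland_fd, stream_fd);
            if (fds[1].revents & POLLIN)    /* serialised toolkit drawing in     */
                replay_toolkit_drawing(stream_fd, wayland_fd);
        }
        return 0;
    }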

Wayland - Beyond X (The H)

Posted Feb 16, 2012 10:28 UTC (Thu) by mjthayer (guest, #39183) [Link] (12 responses)

> The buffer passed between a Wayland client and the compositor is just a handle representing an image that the hardware already knows about. Wayland clients are where any network remoting will take place [...]

Let's see if I have got a better idea now (after having looked at the Wayland architecture document[1] as well). It seems to me that to have an application render remotely one would need two different parallel connections: a remote OpenGL ES/EGL connection for the client to render over (is there such a thing currently?) and a connection from the client to the remote compositor to tell it about buffer updates. (And obviously the logical way to render fonts in such a set-up is to upload the glyphs to the remote display once and render them into your buffers using OpenGL ES commands.) Was that a bit closer?
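
For the font part, here is a rough sketch of what I mean - assuming an EGL context is already current on the display side and a trivial textured-quad shader program is already linked (its attribute locations are passed in); atlas packing and error handling are left out:

    #include <GLES2/gl2.h>

    /* One-time: ship the rasterised glyph atlas across the link as a texture. */
    GLuint upload_glyph_atlas(const unsigned char *pixels, int width, int height)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);           /* one byte per pixel */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0,
                     GL_ALPHA, GL_UNSIGNED_BYTE, pixels);
        return tex;
    }

    /* Per glyph: only a small quad (positions plus texture coordinates into
     * the atlas) needs to go over the wire, not the glyph bitmap itself. */
    void draw_glyph(GLuint a_position, GLuint a_texcoord,
                    const GLfloat positions[8], const GLfloat texcoords[8])
    {
        glVertexAttribPointer(a_position, 2, GL_FLOAT, GL_FALSE, 0, positions);
        glVertexAttribPointer(a_texcoord, 2, GL_FLOAT, GL_FALSE, 0, texcoords);
        glEnableVertexAttribArray(a_position);
        glEnableVertexAttribArray(a_texcoord);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }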

Out of interest, my current understanding also suggests that the compositor and the server should be a single application. Is that right?

[1] http://wayland.freedesktop.org/architecture.html

Wayland - Beyond X (The H)

Posted Feb 16, 2012 10:47 UTC (Thu) by mjthayer (guest, #39183) [Link]

> Out of interest, my current understanding also suggests that the compositor and the server should be a single application. Is that right?

Taking my answer from "FOSDEM: The Wayland display server" in this week's LWN[1]:

"The code is divided in two parts: Wayland is the protocol and IPC mechanism, while Weston is the reference implementation of the compositor (and thus also the display server)."

So yes.

[1] http://lwn.net/Articles/481490/

Wayland - Beyond X (The H)

Posted Feb 16, 2012 11:06 UTC (Thu) by farnz (subscriber, #17727) [Link] (10 responses)

I'm going to try and answer you in reverse order - please bear with me.

Wayland is currently two things:

  1. A protocol for communication between a compositing display server and local applications.
  2. A library that implements that protocol, so that compositor implementers (e.g. Weston, Compiz, KWin) can concentrate on the hard stuff.

The compositor is the display server in Wayland's world - it passes input events to applications and accepts images from them for display.

My personal expectation is that an application that's rendering remotely will be split into two parts; one part (which I've been calling the rendering proxy) will run on the same machine as the Wayland compositor, and maintain two connections:

  1. Wayland protocol to the local compositor.
  2. TCP, SSH stream (to let SSH handle the security stuff for you) or other suitable socket type to the remote application, talking a private protocol.

The remote application will have one open socket, to the rendering proxy, and will serialise its rendering onto the network, and get events back from the proxy.

OpenGL ES has no serialisation form at all at the moment - it's an API that the client application calls. It shouldn't, however, be hard to write a rendering proxy that talks to a fake Wayland "compositor" and EGL library on the application machine, to forward all rendering across the network; there's just a lot of gruntwork there.
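
To give a flavour of that gruntwork: the fake library on the application machine would mostly be entry points like the one below, which just serialise an opcode and its arguments onto the socket to the rendering proxy instead of touching any hardware. The opcode value and framing here are made up purely for illustration; only the glDrawArrays prototype is real:

    #include <stdint.h>
    #include <unistd.h>

    enum { CMD_DRAW_ARRAYS = 2 /* ... one opcode per GL entry point */ };

    static int remote_fd;   /* socket to the rendering proxy, set up elsewhere */

    static void send_cmd(uint32_t op, const void *args, uint32_t len)
    {
        uint32_t header[2] = { op, len };
        (void)write(remote_fd, header, sizeof header);   /* no error handling  */
        (void)write(remote_fd, args, len);
    }

    /* The application thinks it is calling the usual glDrawArrays(); the real
     * call only happens when the proxy replays the stream on the display side. */
    void glDrawArrays(unsigned int mode, int first, int count)
    {
        int32_t args[3] = { (int32_t)mode, first, count };
        send_cmd(CMD_DRAW_ARRAYS, args, sizeof args);
    }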

Wayland - Beyond X (The H)

Posted Feb 16, 2012 18:11 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (6 responses)

>OpenGL ES has no serialisation form at all at the moment - it's an API that the client application calls. It shouldn't, however, be hard to write a rendering proxy that talks to a fake Wayland "compositor" and EGL library on the application machine, to forward all rendering across the network; there's just a lot of gruntwork there.

You can use Gallium3D for this. That's how SVGA (VMWare's 3D acceleration driver for guests) works.

But the amount of traffic is going to make it impractical.

Wayland - Beyond X (The H)

Posted Feb 16, 2012 20:02 UTC (Thu) by mjthayer (guest, #39183) [Link] (5 responses)

> You can use Gallium3D for this. That's how SVGA (VMWare's 3D acceleration driver for guests) works.

To my knowledge (I have partly studied its programming documentation), SVGA emulates a graphics card with 3D capabilities, though with a much simpler register interface than an equivalent physical card. Gallium3D is a framework for writing (mainly DRI2) graphics drivers, which VMWare use because the people who developed it are now their 3D engineering team. Gallium3D has been used for what you describe (see article on Phoronix [1]), though that only helps if you have a Gallium3D driver on the other side, and I don't think that Gallium3D presents any sort of stable API, which might cause problems. I would have thought that directly proxying OpenGL ES might be a better idea (it has been specially developed for a small footprint as far as I can see), though Gallium3D might be useful for translating other APIs (like OpenGL + GLX) to OpenGL ES.

[1] http://www.phoronix.com/scan.php?page=news_item&px=OD...

Wayland - Beyond X (The H)

Posted Feb 16, 2012 23:43 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

>Gallium3D has been used for what you describe (see article on Phoronix [1]), though that only helps if you have a Gallium3D driver on the other side

There's no need for a Gallium3D driver on the host side. Indeed, 3D guest acceleration in VMWare works fine with the NVidia blob or even DirectX on Windows.

>I would have thought that directly proxying OpenGL ES might be a better idea (it has been specially developed for a small footprint as far as I can see), though Gallium3D might be useful for translating other APIs (like OpenGL + GLX) to OpenGL ES.

Direct proxying has some problems. It's way too expensive, for one thing; try running APITrace ( http://zrusin.blogspot.com/2011/04/apitrace.html ) and see for yourself.

Wayland - Beyond X (The H)

Posted Feb 17, 2012 12:52 UTC (Fri) by mjthayer (guest, #39183) [Link] (3 responses)

> There's no need for a Gallium3D driver on the host side. Indeed, 3D guest acceleration in VMWare works fine with the NVidia blob or even DirectX on Windows.

Right, but as I said VMWare's 3D pipeline isn't intimately tied into Gallium3D. They presumably used Gallium3D to develop the drivers because that was what they knew well, given that they (the ex-Tungsten Graphics team) had spent the last several years developing it. In theory it would work just as well without it, though it may be (I haven't looked at the register interface that closely) that their virtual hardware is designed to be a good match for Gallium3D - which wouldn't be very surprising, as Gallium3D was itself designed to be a good match for current physical hardware.

> Direct proxying has some problems. It's way too expensive, for one thing; try running APITrace ( http://zrusin.blogspot.com/2011/04/apitrace.html ) and see for yourself.

Does that go for OpenGL ES 2 as well? I thought it was OpenGL stripped to the minimum that makes sense for modern graphics cards (mainly shader stuff). I haven't looked at Gallium3D that closely yet, but my feeling was that its "API" isn't all that different to OpenGL ES 2. Except that OpenGL ES 2 is a fixed interface whereas Gallium3D is still rather blurry and changing. If you know more than I do, I'd be very interested to hear it.

Wayland - Beyond X (The H)

Posted Feb 17, 2012 16:40 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

>Right, but as I said VMWare's 3D pipeline isn't intimately tied into Gallium3D.
But it is. Even their Windows guest 3D driver is based on Gallium3D.

>They presumably used Gallium3D to develop the drivers because that was what they knew well, given that they (the ex-Tungsten Graphics team) had spent the last several years developing it.
Well, there's a reason they bought Tungsten Graphics in the first place.

>Does that go for OpenGL ES 2 as well? I thought it was OpenGL stripped to the minimum that makes sense for modern graphics cards (mainly shader stuff).
Yes, unfortunately. OpenGL ES2 is just as big for real cases.

>I haven't looked at Gallium3D that closely yet, but my feeling was that its "API" isn't all that different to OpenGL ES 2.

Gallium3D is not really an API, it's more of a good intermediate model for other things. Sort of like LLVM is a nice layer for other stuff.

Wayland - Beyond X (The H)

Posted Feb 17, 2012 17:02 UTC (Fri) by mjthayer (guest, #39183) [Link] (1 responses)

>>Right, but as I said VMWare's 3D pipeline isn't intimately tied into Gallium3D.
>But it is. Even their Windows guest 3D driver is based on Gallium3D.

But surely this is because the purpose of Gallium3D is modularising graphics driver code and letting you reuse as much code as possible in different drivers? So that the approach would have been just as valuable if they were targeting a traditional graphics card rather than a virtual pass-through device?

>>I haven't looked at Gallium3D that closely yet, but my feeling was that its "API" isn't all that different to OpenGL ES 2.
>Gallium3D is not really an API, it's more of a good intermediate model for other things.

Well, I meant Gallium3D's interface on the hardware side, if you like. That is one reason I wrote "API" in quotes.

Wayland - Beyond X (The H)

Posted Feb 18, 2012 4:06 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

>But surely this is because the purpose of Gallium3D is modularising graphics driver code and letting you reuse as much code as possible in different drivers? So that the approach would have been just as valuable if they were targeting a traditional graphics card rather than a virtual pass-through device?

There are no "traditional graphics cards" anymore...

Besides, what are you going to target? What if you target NVidia cards and your host has an ATI card?

So you have to build a translation layer somehow. You can do this at the API level (like the VirtualBox guest additions do), but with shaders and OpenCL it's a doomed proposition. Gallium3D allows you to build a virtual abstract graphics card and use the usual mechanisms like OpenGL or DirectX to work with it.

>Well, I meant Gallium3D's interface on the hardware side, if you like. That is one reason I wrote "API" in quotes.
You can talk to hardware without Gallium3D (indeed, Intel's 3D drivers do not use it). Gallium3D is more of a user-space abstraction that translates OpenGL/OpenCL/VDPAU into a stream of abstract commands (the TGSI command stream), which can then be translated into the graphics card's native command format.
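
Schematically - and these are invented names, not real Gallium3D interfaces - the layering looks something like this: several state trackers produce one abstract representation, and any per-hardware backend can consume it.

    /* Invented types only - a sketch of the layering, not Gallium3D's API. */
    struct abstract_cmds;                 /* stands in for TGSI plus pipe state */

    struct state_tracker {                /* the API the application speaks     */
        const char *api;                  /* "OpenGL", "OpenCL", "VDPAU", ...   */
        struct abstract_cmds *(*to_abstract)(const void *api_calls);
    };

    struct hw_backend {                   /* r600g, nouveau, svga, ...          */
        const char *driver;
        void (*to_native)(const struct abstract_cmds *cmds);
    };

    /* The payoff: any state tracker can be paired with any backend. */
    static void submit(const struct state_tracker *st,
                       const struct hw_backend *hw, const void *api_calls)
    {
        hw->to_native(st->to_abstract(api_calls));
    }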

Wayland - Beyond X (The H)

Posted Feb 17, 2012 2:09 UTC (Fri) by obi (guest, #5784) [Link] (2 responses)

Does that mean you'd have to write a "rendering proxy" for every rendering API that could possibly be used? I.e. Cairo, OpenGL, whatever obscure graphics library a future Wayland client might decide to use?

Wayland - Beyond X (The H)

Posted Feb 17, 2012 5:02 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

No, obscure old libraries are unlikely to be ported to Wayland and would just run inside a hosted X-server.

Alternatively, they can fall back to a generic VNC-like protocol.

Wayland - Beyond X (The H)

Posted Feb 17, 2012 12:56 UTC (Fri) by mjthayer (guest, #39183) [Link]

I thought that these days most graphics/GPU/GPGPU stuff could be expressed using buffers and shaders anyway, so that you could encapsulate it in OpenGL ES 2 if you wanted. That is similar to what Gallium3D is doing, to bring it back into this thread too.
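
By "buffers and shaders" I mean nothing more exotic than the basic OpenGL ES 2 building blocks - roughly the following, assuming a current EGL context and leaving out all error checking:

    #include <GLES2/gl2.h>

    /* A buffer object holding whatever data you want on the GPU... */
    GLuint make_buffer(const void *data, GLsizeiptr size)
    {
        GLuint buf;
        glGenBuffers(1, &buf);
        glBindBuffer(GL_ARRAY_BUFFER, buf);
        glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
        return buf;
    }

    /* ...and a shader describing what to do with it. */
    GLuint make_fragment_shader(void)
    {
        static const char *src =
            "precision mediump float;\n"
            "uniform vec4 u_colour;\n"
            "void main() { gl_FragColor = u_colour; }\n";
        GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(sh, 1, &src, NULL);
        glCompileShader(sh);
        return sh;
    }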

