From: Stephen S. <rad...@gm...> - 2012-07-11 21:54:59
On Mon, Jun 4, 2012 at 4:45 AM, Camille Troillard <ca...@os...> wrote:
>> Update:
>>
>> I should also mention that the send function has the same conventions
>> as recv: you have to call it in a loop because you cannot assume that
>> it will send all your data. While it might always work in your
>> development environment, that's the kind of assumption that will bite
>> you later.
>
> For data input (lo_server), you said that the incomplete message could
> be stored while we wait on lo_wait. If this is correct, I can try to
> implement something. Right now, I have no idea how this could be
> implemented for message sending.

I think the right approach is to wrap send() and recv() in a buffered
layer at the byte level; basically, the liblo semantics need to be given
an asynchronous API.

For reading, we need to read all ready bytes into a buffer and dispatch
as needed. That should be straightforward.

For send(), we need to copy the outgoing bytes to a buffer and send as
much as possible. Since lo_send() assumes completely synchronous
semantics, if not all the data is sent we need to loop on send() until
it all goes. However, this can block programs, so it would be nice to
provide an alternative lo_send_async() function which returns even if a
message was not completely sent. It could return, for example, a boolean
indicating whether the send was completed, telling the calling program
that it should call lo_send_async() again in the near future. (Then its
parameters would need to be optional: it is not only called to queue up
new messages, but also to continue sending old ones.) Note this also
means copying the message data; alternatively, we could retain a pointer
to the lo_message and simply specify that the user must not free that
memory until lo_send_async() returns 0.

What do you think? It's a bit more than I have time for at the moment,
however. A loop around send() should do the trick for the synchronous
API. If we're tricky about it, in the bidirectional case we could use
select() to read any incoming data while waiting to write.

> I think it is important to fix these problems so TCP support will be
> complete.

Agreed!

Steve
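A minimal sketch of the send-all loop described above for the synchronous path, assuming plain POSIX sockets; the helper name send_all is illustrative and not part of liblo's API:

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>
#include <stddef.h>

/* Keep calling send() until every byte has gone out, retrying on
 * EINTR.  Returns 0 on success, -1 on a real socket error. */
static int send_all(int sock, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(sock, buf + sent, len - sent, 0);
        if (n < 0) {
            if (errno == EINTR)
                continue;        /* interrupted: retry */
            return -1;           /* real error (or EAGAIN on a
                                    non-blocking socket) */
        }
        sent += (size_t)n;       /* partial send: keep looping */
    }
    return 0;
}
```

An asynchronous lo_send_async() along the lines sketched in the message would instead return after one short write and remember the offset for the next call, rather than looping until completion.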
From: Stephen S. <rad...@gm...> - 2012-07-11 21:44:58
On Mon, Jun 4, 2012 at 5:18 AM, Camille Troillard <ca...@os...> wrote:
> Hi Steve,
>
> Sorry for the late answer.
>
> On 30 Apr 2012, at 21:37, Stephen Sinclair wrote:
>
>> On Sat, Apr 28, 2012 at 9:42 AM, Camille Troillard
>> <ca...@os...> wrote:
>>> Hello,
>>>
>>> In message.c at line 942, lo_arg_pp_internal is called with the last
>>> argument (bigendian) set to 0. This causes values to print
>>> incorrectly on little-endian CPUs.
>>
>> That line is:
>>
>>     lo_arg_pp_internal(m->types[i], d, 1);
>>
>> Do you mean "set to 1"?
>
> Yes, in the liblo trunk, lo_arg_pp_internal is called with the last
> argument being 1. I believe it should be 0 (see attached diff).
>
>> My understanding is that lo_arg_pp needs to be told whether the data
>> has been converted from network order or not. So lo_arg_pp() calls it
>> with bigendian=0, since the data has presumably already been
>> deserialised. Whereas lo_message_pp calls it with bigendian=1, since
>> inside the lo_message it points to serialised data?
>>
>> However, this seems to be not true, as you point out. Data pointed to
>> by lo_message is always already converted to native order, so it
>> should always be called with bigendian=0, even in lo_message_pp.
>> Confusing. It would seem only useful when referring to serialised
>> data. I guess I'll need to test on little- and big-endian machines to
>> be sure. Fortunately I still have access to a PPC Mac.
>
> Did you find something regarding this issue?
> I'm sorry, I don't have access to a PPC Mac...

Yes, you're right; it fixes the problem on both PPC and x86 Macs.

Steve
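To see why the flag matters, here is a toy model of the double-conversion bug, with hypothetical names (print_i32 is not a liblo function): once data is already in host order, passing bigendian = 1 swaps it a second time on little-endian machines.

```c
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

/* A printer that is told whether its input is still in network
 * (big-endian) byte order. */
static void print_i32(int32_t v, int bigendian)
{
    if (bigendian)
        v = (int32_t)ntohl((uint32_t)v);  /* convert before printing */
    printf("%d\n", v);
}

int main(void)
{
    int32_t host_value = 42;   /* already deserialised, host order */

    print_i32(host_value, 0);  /* correct: prints 42 */
    print_i32(host_value, 1);  /* the bug: on little-endian CPUs the
                                  extra swap prints 704643072; on
                                  big-endian CPUs ntohl is a no-op,
                                  which is why the bug hid there */
    return 0;
}
```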
From: Stephen S. <rad...@gm...> - 2012-07-11 21:24:20
Hi Camille,

Sorry, I have been meaning to get to the TCP stuff for over two months
now. I didn't do a release in May as I said I would, and now it's July!

On Mon, Jun 4, 2012 at 6:07 AM, Camille Troillard <ca...@os...> wrote:
> Hello Stephen,
>
> I am resurrecting an old discussion.
> To refresh your memory, the problem was the following:
>
> 1- Using liblo, a TCP client sends data to a TCP server.
> 2- The client stops sending data.
> 3- The server is closed.
> 4- Now we try to open the server again on the same port.
> 5- This results in a "port in use" error, and the server cannot be
>    opened anymore.
>
> The server can be opened again only if we wait long enough (about a
> minute), or if this scenario happens:
>
> 1- Using liblo, a TCP client sends data to a TCP server.
> 2- The client stops sending data.
> 3- The server is closed.
> 3b- The client sends a message to the closed server, resulting in an
>     expected error.
> 4- Now we try to open the server again on the same port.
> 5- The server opens again.
>
> A friend gave me what looks like a good solution: set the SO_REUSEADDR
> flag on the server socket. So far this works well for me and fixes a
> behavior I have long considered a bug.

To me these symptoms seem to indicate that the server is not properly
closing the port, which is surprising because afaik there are close()
and shutdown() calls in all the right places, but perhaps something is
missing. So I think the correct fix would be to make sure that errors
are detected and handled properly, and that the shutdown procedure is
correct. However, in the meantime I don't see any reason not to set
this flag. I just wanted to check that it doesn't do weird things when
you start multiple servers on the same port, but so far, so good: I
modified example_server.c to use LO_TCP, verified that SO_REUSEADDR is
set, and it still gives me "cannot find free port" when it is run
twice.

Anyway, patch applied.

Steve
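As a sketch of where the flag belongs (after socket() and before bind(), so a previous connection lingering in TIME_WAIT does not block rebinding), here is a hypothetical stand-alone helper; it is not liblo's actual server code:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Open a listening TCP socket with SO_REUSEADDR enabled.  The option
 * must be set before bind(), otherwise it has no effect on the
 * "address already in use" failure.  Returns the fd, or -1 on error. */
int open_reusable_server(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return -1;
    }

    int yes = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)) < 0)
        perror("setsockopt(SO_REUSEADDR)");   /* non-fatal: bind anyway */

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 8) < 0) {
        perror("bind/listen");
        close(fd);
        return -1;
    }
    return fd;
}
```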
From: Stephen S. <rad...@gm...> - 2012-07-11 20:59:57
On Wed, Jul 11, 2012 at 7:06 AM, Camille Troillard <ca...@os...> wrote:
> Hi Stephen,
>
> I'd love to know your opinion regarding this suggestion, or to see
> this patch merged.
>
> By the way, may I make another suggestion? I think it would be great
> to see liblo on GitHub so everyone can fork and submit patches via
> GitHub's collaborative user interface.

The svn repository is already mirrored regularly on gitorious.org:

http://gitorious.org/projects/liblo

I do this because I use git-svn anyway, so every time I "git svn
dcommit", I also push the new master to Gitorious. One thing I like
about this workflow is that it keeps the history linear, but it also
makes merging more annoying and destroys author information. And linear
history is over-rated.

I have a GitHub account anyway, so I would be willing to mirror the
Gitorious repository over there. On the other hand, now that SourceForge
supports git, I don't see why we can't just switch to it there. You'd be
free to send me pull requests via GitHub, and I could merge them and
push them to GitHub/Gitorious and SourceForge, where SF would remain the
official version.

I'll look into SF's git resources when I get a chance. I do prefer git
after all. The reason I originally chose Gitorious over GitHub was that
it's open source (AGPL) and has better size limits, but since then I've
become a pretty regular GitHub user. The nice thing about git is that in
the end it doesn't really matter where it's hosted.

Steve
From: Camille T. <ca...@os...> - 2012-07-11 11:21:27
Hi Stephen,

I'd love to know your opinion regarding this suggestion, or to see this
patch merged.

By the way, may I make another suggestion? I think it would be great to
see liblo on GitHub so everyone can fork and submit patches via GitHub's
collaborative user interface.

Best,
Camille

Begin forwarded message:

> From: Camille Troillard <ca...@os...>
> Date: 4 June 2012 12:07:06 CEST
> To: liblo development list <lib...@li...>
> Subject: [liblo] TCP and SO_REUSEADDR
> Reply-To: liblo development list <lib...@li...>
>
> Hello Stephen,
>
> I am resurrecting an old discussion.
> To refresh your memory, the problem was the following:
>
> 1- Using liblo, a TCP client sends data to a TCP server.
> 2- The client stops sending data.
> 3- The server is closed.
> 4- Now we try to open the server again on the same port.
> 5- This results in a "port in use" error, and the server cannot be
>    opened anymore.
>
> The server can be opened again only if we wait long enough (about a
> minute), or if this scenario happens:
>
> 1- Using liblo, a TCP client sends data to a TCP server.
> 2- The client stops sending data.
> 3- The server is closed.
> 3b- The client sends a message to the closed server, resulting in an
>     expected error.
> 4- Now we try to open the server again on the same port.
> 5- The server opens again.
>
> A friend gave me what looks like a good solution: set the SO_REUSEADDR
> flag on the server socket. So far this works well for me and fixes a
> behavior I have long considered a bug.
>
> Here is some documentation about this:
>
>> I've come across Beej's Guide to Network Programming:
>>
>> http://beej.us/guide/bgnet/
>>
>> It seems quite good. It's up to date, i.e. it covers IPv4 and IPv6,
>> and emphasises writing code that accommodates either.
>>
>> I'm still working through it, but I came across a bit (at the bottom
>> of page 19, in section 5.3) that may help with OSCulator reporting a
>> "port in use" error after un-pausing. The author also mentions that
>> the problem goes away after a minute or so, which matches what you
>> said.
>
> More specifically:
>
>>> Sometimes, you might notice, you try to rerun a server and bind()
>>> fails, claiming "Address already in use." What does that mean? Well,
>>> a little bit of a socket that was connected is still hanging around
>>> in the kernel, and it's hogging the port. You can either wait for it
>>> to clear (a minute or so), or add code to your program allowing it
>>> to reuse the port, like this:
>>>
>>>     int yes = 1;
>>>     //char yes = '1'; // Solaris people use this
>>>
>>>     // lose the pesky "Address already in use" error message
>>>     if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
>>>                    &yes, sizeof(int)) == -1) {
>>>         perror("setsockopt");
>>>         exit(1);
>>>     }
>
> And the patch is attached...
>
> Best,
> Cam