From: Gilles E. <g....@fr...> - 2011-01-31 06:35:36
----- Original Message -----
From: "Olaf Westrik" <wei...@ip...>
To: "Gilles Espinasse" <g....@fr...>
Cc: "IPCOP devel" <ipc...@li...>
Sent: Sunday, January 30, 2011 2:18 PM
Subject: Re: [IPCop-devel] random error may happen in md5sum check

> On 2011-01-30 13:44, Gilles Espinasse wrote:
> > I am not sure this happens to me on Debian v5, bash version 3.2.39(1)-release
> > (i486-pc-linux-gnu).
>
> It does happen from time to time for me.
>
> > Did that ever happen to your builds?
>
> Yes, I estimate 1 failure in every 50 builds.
>
> I also tried several things some time ago, but could not find the reason
> for the error.

I let one machine run, rebuilding glib in a loop, but again had no failure
after 100 attempts. I chose glib only because on a build where only glib had
been rebuilt, I had a failure in the traceroute md5 check.

If we don't know how to reproduce this easily, we can't investigate much.

Gilles

From: Olaf W. <wei...@ip...> - 2011-01-30 13:18:44
On 2011-01-30 13:44, Gilles Espinasse wrote:
> I am not sure this happens to me on Debian v5, bash version 3.2.39(1)-release
> (i486-pc-linux-gnu).

It does happen from time to time for me.

> Did that ever happen to your builds?

Yes, I estimate 1 failure in every 50 builds.

I also tried several things some time ago, but could not find the reason for
the error.

Olaf

From: Gilles E. <g....@fr...> - 2011-01-30 13:00:35
----- Original Message -----
From: "Gilles Espinasse" <g....@fr...>
To: "IPCOP devel" <ipc...@li...>
Sent: Sunday, January 30, 2011 1:44 PM
Subject: [IPCop-devel] random error may happen in md5sum check

> Then I tried
>
> for i in {1..100}; do echo "attempt $i ..."; (./make.sh build || break); echo 3 > /proc/sys/vm/drop_caches; done
>
> It took 35 s per loop, but I still did not get any failure.

The non-root way is:

for i in {1..100}; do echo "attempt $i ..."; (./make.sh build || break); sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'; done

Gilles

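One caveat about these loops (an editor's observation, not something raised in the thread): `(./make.sh build || break)` runs in a subshell, so the `break` only exits the subshell and the loop keeps iterating even after a failed build. A minimal demonstration with a stand-in command:

```shell
#!/bin/sh
# With the subshell, 'break' cannot reach the outer loop:
# all five iterations run even though the command always fails.
count_subshell=0
for i in 1 2 3 4 5; do
  (false || break)
  count_subshell=$((count_subshell + 1))
done

# Without the subshell, the loop stops at the first failure.
count_plain=0
for i in 1 2 3 4 5; do
  false || break
  count_plain=$((count_plain + 1))
done

echo "subshell: $count_subshell, plain: $count_plain"
```

Dropping the parentheses (`./make.sh build || break; sudo sh -c '...'`) keeps the early exit on failure while still dropping the caches between attempts.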
From: Gilles E. <g....@fr...> - 2011-01-30 12:48:06
It happens to me on different machines that './make.sh build' may fail during
md5 checking with

ERROR: md5sum error in <package>, check file in cache or signature

When looking in preparation.log, '<package> checksum ok' is found for the
package. That should really never happen: if the checksum is ok, no error
should have been returned, so the build should not have stopped.

The packages where the issue happens are very random. It may happen in one
build out of maybe 20 or 40 attempts; it just happened to me twice yesterday.
I don't know if this is a bash issue. I have been seeing it for many months.

It happens to me currently on Ubuntu-10.04, bash version 4.1.5(1)-release
(i486-pc-linux-gnu), and on Gentoo-2011-1 (hardened), bash version
4.1.7(1)-release (i686-pc-linux-gnu). I am not sure it happens to me on
Debian v5, bash version 3.2.39(1)-release (i486-pc-linux-gnu).

Some weeks ago, I hacked make.sh to get more clues around that code, without
running the packages build (because rebuilding every time consumes time), and
let 'make.sh build' run in a loop, but I was unable to reproduce the error,
maybe due to the changed code.

I tried to reproduce it today without any better result.

I tried first

for i in {1..100}; do echo "attempt $i ..."; ./make.sh build || break; done

with packages_build commented out; the loop is very fast, but I did not get
any failure.

Then I tried

for i in {1..100}; do echo "attempt $i ..."; (./make.sh build || break); echo 3 > /proc/sys/vm/drop_caches; done

It took 35 s per loop, but I still did not get any failure.

Then I went back to the unmodified make.sh that runs packages_build, in a loop:

for i in {1..60}; do echo "attempt $i ..."; ./make.sh build || break; done

with still no random error.

Did that ever happen to your builds?

Gilles

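For reference, the check being discussed boils down to comparing `md5sum` output against a stored value. A hypothetical sketch (not IPCop's actual make.sh code; the function name and messages are illustrative) that reports both sums on mismatch, so a "random" failure would leave something concrete to investigate:

```shell
#!/bin/sh
# Hypothetical sketch, not IPCop's make.sh: verify a file against an
# expected md5 and report both sums on mismatch.
check_md5() {
  file=$1
  expected=$2
  actual=$(md5sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "$file checksum ok"
  else
    echo "ERROR: md5sum error in $file (expected $expected, got $actual)" >&2
    return 1
  fi
}

# Example with a throwaway file:
printf 'hello\n' > /tmp/sample.txt
expected=$(md5sum /tmp/sample.txt | awk '{print $1}')
check_md5 /tmp/sample.txt "$expected"
```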
From: Gilles E. <g....@fr...> - 2011-01-24 21:32:40
----- Original Message -----
From: "Gilles Espinasse" <g....@fr...>
To: "Olaf Westrik" <wei...@ip...>
Cc: <ipc...@li...>
Sent: Sunday, January 23, 2011 10:58 PM
Subject: Re: [IPCop-devel] ccache-3 upgrade

ccache's 1 GB default size is too small. I had the toolchain compiled twice
in a row and a cache miss count of 1 for glibc on the second run!

But if I do

./make.sh clean && ./make.sh toolchain && ./make.sh build
./make.sh clean && ./make.sh toolchain && ./make.sh build

then on the second pass the miss count for glibc (toolchain) is 4802. As the
ccache size is near the 1 GB limit, that means the previous 4801 hits had
been evicted from the cache.

I will restart with a clean ccache and a 5 GB size limit and see which size
the cache reaches. Then I should look at how much time it costs when the
ccache compression option is activated.

Gilles

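Raising the limit is a one-line ccache command. A small guarded sketch (using ccache's standard `-M` size flag; the snippet skips gracefully on machines without ccache installed):

```shell
#!/bin/sh
# Raise the ccache ceiling to 5 GB so two full build rounds fit without
# evicting the first round's objects, then show the current statistics.
if command -v ccache >/dev/null 2>&1; then
  ccache -M 5G
  ccache -s
else
  echo "ccache not installed; nothing to configure"
fi
```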
From: Gilles E. <g....@fr...> - 2011-01-24 07:57:56
----- Original Message -----
From: "Gilles Espinasse" <g....@fr...>
To: "Olaf Westrik" <wei...@ip...>
Cc: <ipc...@li...>
Sent: Sunday, January 23, 2011 10:58 PM
Subject: Re: [IPCop-devel] ccache-3 upgrade

It is probably much easier to use a separate log file for the ccache stats.
Attached is the changed Config (simplified a bit, as we don't need to check
whether ccache is present after toolchain).

I committed the gcc triplet symlink creation; that works fine (not yet
tested after toolchain).

Gilles

From: David W S. <avi...@ai...> - 2011-01-23 22:38:01
On 1/23/2011 2:03 PM, Gilles Espinasse wrote:
> ----- Original Message -----
> From: "David W Studeman" <avi...@ai...>
> To: <ipc...@li...>
> Sent: Sunday, January 23, 2011 6:15 PM
> Subject: Re: [IPCop-devel] ccache-3 upgrade
>
> >>>> What guess configure for build?
> >>>> That should be on top of log, line 7.
> >>>
> >>> Yes, line 7 indeed as you say:
> >>>
> >>> checking build system type... i686-pc-linux-gnu
> >>>
> >> so this is a 32-bits distrib and a 64-bits cpu?
> >>
> >> It work for me unchanged on that case, ubuntu-10.04 32-bits amd64 cpu.
> >>
> >> Gilles
> >
> > No, it is a 64 bit distro as well.
> >
> You had a linux-32 shell open?
> or a 64-bits cpu should have been found.
>
> This is not needed to open a 32-bits shell.
> toolchain will compile from a 64-bits shell.
>
> Gilles

No, the 64 bits should have been found. It's been quite some time since I had
to open a 32 bit shell for 1.9.x. The last good 64 bit toolchain compilation
I had was on January 11th.

--
Dave Studeman
http://www.raqcop.com

From: Gilles E. <g....@fr...> - 2011-01-23 22:08:05
----- Original Message -----
From: "David W Studeman" <avi...@ai...>
To: <ipc...@li...>
Sent: Sunday, January 23, 2011 6:15 PM
Subject: Re: [IPCop-devel] ccache-3 upgrade

> >> > What guess configure for build?
> >> > That should be on top of log, line 7.
> >>
> >> Yes, line 7 indeed as you say:
> >>
> >> checking build system type... i686-pc-linux-gnu
> >>
> > so this is a 32-bits distrib and a 64-bits cpu?
> >
> > It work for me unchanged on that case, ubuntu-10.04 32-bits amd64 cpu.
> >
> > Gilles
>
> No, it is a 64 bit distro as well.

You had a linux-32 shell open? Otherwise a 64-bits cpu should have been found.

There is no need to open a 32-bits shell; the toolchain will compile from a
64-bits shell.

Gilles

From: Gilles E. <g....@fr...> - 2011-01-23 22:01:44
----- Original Message -----
From: "Gilles Espinasse" <g....@fr...>
To: "Olaf Westrik" <wei...@ip...>
Cc: <ipc...@li...>
Sent: Sunday, January 23, 2011 10:48 PM
Subject: Re: [IPCop-devel] ccache-3 upgrade

Attached is the changed Config with a stats reset before each build and a
stats output at the end of each package build.

Gilles

From: Gilles E. <g....@fr...> - 2011-01-23 21:52:08
----- Original Message -----
From: "Olaf Westrik" <wei...@ip...>
To: "Gilles Espinasse" <g....@fr...>
Cc: <ipc...@li...>
Sent: Sunday, January 23, 2011 9:11 PM
Subject: Re: [IPCop-devel] ccache-3 upgrade

> On 2011-01-23 18:57, Gilles Espinasse wrote:
> >> Speed is similar to what it used to be, ccache is being used.
> >> Looks good to me :-)
> >
> > good if it's similar.
> > but that could be better, because ccache is not used on glibc at least.
> > binutils and gcc are notably faster on the second build, so ccache
> > should work there.
>
> Here is a side by side compare from my first and second run.
> I see a (significant) speed increase after gcc, not before. I'll add
> ccache statistics code in PREBUILD and POSTBUILD and rebuild.
>
> stage2          [  11 ] [   1 ]
> linux-headers   [  33 ] [  32 ]

this one is mostly not compilation, but a big file being opened and patched

> glibc           [ 522 ] [ 515 ]
> binutils        [  65 ] [  63 ]
> gmp             [  29 ] [  20 ]

gmp uses gcc; I was unsure from the numbers

> gcc             [ 225 ] [ 222 ]

So we could accelerate glibc, binutils and gcc with a working ccache.

Actually, ccache does nothing for those three, except for the (binutils,
gcc) toolchain pass 1, where we don't use --build, so gcc is used and ccache
works. --host and --build are needed for glibc, as we compile for a cpu that
is not the building machine.

So we need to install an LFS_TGT symlink after gcc pass 1 and a TARGET_2
symlink after gcc pass 2. That should be doable.

I sent a message to the ccache mailing list with that idea; I will see if
another solution is found.

Gilles

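The symlink approach can be sketched with a stub standing in for ccache (paths and the stub are illustrative; the real ccache inspects argv[0] to decide which compiler to run on a cache miss):

```shell
#!/bin/sh
# Sketch of the triplet-symlink idea: make i486-linux-gnu-gcc resolve to
# ccache so cross compiles go through the cache. A stub stands in for the
# real ccache binary here; it just echoes the name it was invoked under.
TOOLS=/tmp/tools-demo
mkdir -p "$TOOLS/bin"
printf '#!/bin/sh\necho "wrapped: ${0##*/} $*"\n' > "$TOOLS/bin/ccache"
chmod +x "$TOOLS/bin/ccache"

# One symlink per cross-compiler name that must be cached:
for name in i486-linux-gnu-gcc i486-linux-gnu-g++; do
  ln -sf ccache "$TOOLS/bin/$name"
done

"$TOOLS/bin/i486-linux-gnu-gcc" -O2 -c demo.c
```

With the real ccache at the symlink target, an unmodified `i486-linux-gnu-gcc` invocation would transparently hit the cache; the drawback Gilles notes still applies, since the links must be refreshed after each gcc install.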
From: Olaf W. <wei...@ip...> - 2011-01-23 20:11:23
On 2011-01-23 18:57, Gilles Espinasse wrote:
>> Speed is similar to what it used to be, ccache is being used.
>> Looks good to me :-)
>
> good if it's similar.
> but that could be better because ccache is not used on glibc at least.
> binutils and gcc are notably faster on second build, so ccache should work
> there.

Here is a side by side compare from my first and second run.
I see a (significant) speed increase after gcc, not before. I'll add
ccache statistics code in PREBUILD and POSTBUILD and rebuild.

stage2              [   11 ] [    1 ]
linux-headers       [   33 ] [   32 ]
glibc               [  522 ] [  515 ]
tzdata              [    1 ] [    0 ]
adjust-toolchain    [    0 ] [    1 ]
zlib                [    6 ] [    1 ]
binutils            [   65 ] [   63 ]
gmp                 [   29 ] [   20 ]
mpfr                [   26 ] [   18 ]
gcc                 [  225 ] [  222 ]
sed                 [   18 ] [   13 ]
pkg-config          [   23 ] [   13 ]
ncurses             [   41 ] [   20 ]
util-linux-ng       [   50 ] [   24 ]
e2fsprogs           [   38 ] [   15 ]
coreutils           [   99 ] [   59 ]
iana-etc            [    1 ] [    1 ]
m4                  [   38 ] [   24 ]
bison               [   34 ] [   18 ]
procps              [    7 ] [    5 ]
grep                [   37 ] [   25 ]
readline            [   12 ] [    9 ]
bash                [   82 ] [   36 ]
libtool             [   11 ] [    9 ]
perl                [  321 ] [  166 ]
autoconf            [    4 ] [    4 ]
automake            [    3 ] [    3 ]
bzip2               [    6 ] [    2 ]
diffutils           [   36 ] [   28 ]
gawk                [   20 ] [   11 ]
findutils           [   32 ] [   25 ]
flex                [   13 ] [   10 ]
gettext             [  154 ] [  105 ]
groff               [   45 ] [   28 ]
gzip                [   25 ] [   22 ]
iproute2            [   14 ] [    6 ]
kbd                 [   12 ] [   11 ]
less                [    9 ] [    7 ]
make                [   13 ] [   11 ]
module-init-tools   [    9 ] [    4 ]
patch               [   10 ] [    6 ]
psmisc              [   11 ] [    8 ]
rsyslog             [   29 ] [   20 ]
shadow              [   30 ] [   28 ]
strace              [   14 ] [   12 ]
sysvinit            [    7 ] [    2 ]
tar                 [   58 ] [   47 ]
texinfo             [   25 ] [   16 ]
udev                [   10 ] [    8 ]
vim                 [   29 ] [   18 ]
ipcop               [    1 ] [    0 ]
sysfsutils          [   11 ] [   11 ]
which               [    6 ] [    5 ]
net-tools           [   13 ] [    4 ]
libusb              [   11 ] [    9 ]
libpcap             [   15 ] [    6 ]
libxml2             [   60 ] [   18 ]
linux-atm           [   57 ] [   26 ]
ppp                 [   10 ] [    5 ]
rp-pppoe            [   10 ] [    6 ]
unzip               [   20 ] [    2 ]
linux               [ 1182 ] [  305 ]
CnxADSL             [   30 ] [    6 ]
firmware-extractor  [    1 ] [    0 ]
pulsar              [    4 ] [    1 ]
solos-pci           [    2 ] [    2 ]
wanpipe             [  242 ] [  126 ]

From: Gilles E. <g....@fr...> - 2011-01-23 18:01:07
----- Original Message -----
From: "Olaf Westrik" <wei...@ip...>
To: "Gilles Espinasse" <g....@fr...>
Cc: <ipc...@li...>
Sent: Sunday, January 23, 2011 5:46 PM
Subject: Re: [IPCop-devel] ccache-3 upgrade

> On 2011-01-23 10:48, Gilles Espinasse wrote:
>
> Speed is similar to what it used to be, ccache is being used.
> Looks good to me :-)

good if it's similar.
but that could be better, because ccache is not used on glibc at least.
binutils and gcc are notably faster on the second build, so ccache should
work there.

I added in make.sh chroot_make (with the other variables)

CCACHE_LOGFILE=/usr/src/log_i486/ccache.log \

With this one-liner to parse the log

grep 'Working directory' log_i486/ccache.log | sed 's|[ ]*\]|]|g' | awk '{print $5}' | sort | uniq > mylog

you can see that glibc is never the working directory (tested on toolchain).
The sed is there because the hit number in the ccache log is left-aligned,
and a full-width number has no space before the ']'.

The reason is that we never use gcc (linked to ccache) but directly
i486-linux-gnu-gcc. So ccache is currently out of the circuit for glibc.

That issue should be older than your TARGET change. It probably dates from
the time I made the cross pass 1 work.

There is probably more than one way to fix that:
- symlinking i486-linux-gnu-gcc to ccache comes first to my mind (untested).
  But that name comes first from the host, and we would need to change the
  links after each gcc install.

I had thought, even before the ccache upgrade, of adding a ccache stats
output after each package compilation. That also means adding a stats reset
before each compilation (so in PREBUILD/POSTBUILD). That's only a 4-line
change, easy to test. Then it would be easier to parse the stats per package.

In ccache.log, I have also seen some curious 'gcc -m64' in linux-headers
(which fail). I am surprised to see that on a 32-bits compilation. I haven't
looked at it in more detail.

Gilles

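The one-liner above can be wrapped in a reusable function. A sketch assuming ccache-3's `Working directory` log lines (the synthetic log below is illustrative, not real ccache output):

```shell
#!/bin/sh
# List the unique working directories recorded in a ccache log, to spot
# packages whose compiles never went through ccache. The sed strips the
# padding before ']' so the field positions stay stable; $NF takes the
# last field, the directory itself.
ccache_workdirs() {
  grep 'Working directory' "$1" \
    | sed 's|[ ]*\]|]|g' \
    | awk '{print $NF}' \
    | sort -u
}

# Synthetic example log (format illustrative):
cat > /tmp/ccache-demo.log <<'EOF'
[  123] Working directory: /usr/src/lfs/binutils
[ 1234] Working directory: /usr/src/lfs/glibc
[12345] Working directory: /usr/src/lfs/glibc
EOF
ccache_workdirs /tmp/ccache-demo.log
```

A package directory missing from the output, as glibc is above in Gilles' logs, means none of its compiles went through ccache.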
From: David W S. <avi...@ai...> - 2011-01-23 17:15:22
Gilles Espinasse wrote:
> ----- Original Message -----
> From: "David W Studeman" <avi...@ai...>
> To: <ipc...@li...>
> Sent: Sunday, January 23, 2011 5:24 PM
> Subject: Re: [IPCop-devel] ccache-3 upgrade
>
>> Gilles Espinasse wrote:
>>
>>>> I was having difficulty building a toolchain as of recent. It would
>>>> fail at xy. I'll try it again from toolchain to clean to build.
>>>>
>>> Oops, I meant XZ and the toolchain still fails there on my 64 bit
>>> system.
>>> Note that I do actually have xz and xz devel installed here.
>>>
>> You should not need that.
>> What guess configure for build?
>> That should be on top of log, line 7.
>
> Yes, line 7 indeed as you say:
>
> checking build system type... i686-pc-linux-gnu
>
> so this is a 32-bits distrib and a 64-bits cpu?
>
> It work for me unchanged on that case, ubuntu-10.04 32-bits amd64 cpu.
>
> Gilles

No, it is a 64 bit distro as well.

--
Dave
http://www.raqcop.com

From: Gilles E. <g....@fr...> - 2011-01-23 17:03:44
----- Original Message -----
From: "David W Studeman" <avi...@ai...>
To: <ipc...@li...>
Sent: Sunday, January 23, 2011 5:24 PM
Subject: Re: [IPCop-devel] ccache-3 upgrade

> Gilles Espinasse wrote:
>
> >> I was having difficulty building a toolchain as of recent. It would
> >> fail at xy. I'll try it again from toolchain to clean to build.
> >>
> > Oops, I meant XZ and the toolchain still fails there on my 64 bit
> > system.
> > Note that I do actually have xz and xz devel installed here.
> >
> You should not need that.
> What guess configure for build?
> That should be on top of log, line 7.
>
> Yes, line 7 indeed as you say:
>
> checking build system type... i686-pc-linux-gnu

so this is a 32-bits distrib and a 64-bits cpu?

It works for me unchanged in that case: ubuntu-10.04 32-bits, amd64 cpu.

Gilles

From: Olaf W. <wei...@ip...> - 2011-01-23 16:46:47
On 2011-01-23 10:48, Gilles Espinasse wrote:
>> Since the ccache changes building is considerably slower for me.
>> I have not yet run comparing tests, but I estimate the build to take
>> twice the time it used to.
>> Has anyone else noticed the same?
>
> I had the same mind and started to look yesterday.
> I think this is at least twice slower than previously using ccache-1.4.

With the modification from SVN 5366 I did a ccache clean & rebuild of the
toolchain. Then I ran clean / build twice.

First pass took 109 minutes and 23 seconds:

cache hit (direct)             354
cache hit (preprocessed)        71
cache miss                   22897
called for link               2418
multiple source files           22
compile failed                 972
preprocessor error             461
bad compiler arguments         242
unsupported source language    185
autoconf compile/link         6226
unsupported compiler option   4672
no input file                 1696
files in cache               54469
cache size                   420.8 Mbytes
max cache size               976.6 Mbytes

Second pass took 61 minutes and 25 seconds:

cache hit (direct)           21343
cache hit (preprocessed)      1810
cache miss                     129
called for link               2395
multiple source files           22
compile failed                 972
preprocessor error             461
bad compiler arguments         242
unsupported source language    185
autoconf compile/link         6226
unsupported compiler option   4672
no input file                 1696
files in cache               55702
cache size                   427.7 Mbytes
max cache size               976.6 Mbytes

Speed is similar to what it used to be, ccache is being used.
Looks good to me :-)

Olaf

From: David W S. <avi...@ai...> - 2011-01-23 16:24:42
Gilles Espinasse wrote:
> ----- Original Message -----
> From: "David W Studeman" <avi...@ai...>
> To: <ipc...@li...>
> Sent: Sunday, January 23, 2011 3:29 AM
> Subject: Re: [IPCop-devel] ccache-3 upgrade
>
>> David W Studeman wrote:
>>
>>> David W Studeman wrote:
>>>
>>>> On 1/22/2011 1:57 PM, Olaf Westrik wrote:
>>>>> Since the ccache changes building is considerably slower for me.
>>>>> I have not yet run comparing tests, but I estimate the build to take
>>>>> twice the time it used to.
>>>>> Has anyone else noticed the same?
>>>>>
>>>>> Olaf
>>>>
>>>> I'll have to pay closer attention. I know on my usual build host
>>>> IPCop 2 takes longer than 1.4 but still can spit out a shiny iso in
>>>> short order. Are you talking about just cleaning and not a new
>>>> toolchain each time? Do you build locally? I just started a build on
>>>> a QUAD core Phenom with my build partition as Reiser after cleaning
>>>> first. Should take an hour or so. This is a 64 bit version of
>>>> OpenSuSE.
>>>
>>> Ok, it took this long by cleaning and using a preexisting toolchain:
>>>
>>> 51M /home/ipcop/cobaltsvn/ipcop-1.9.19-install-cd.i486.iso
>>> ... which took: 73 minutes and 35 seconds
>>>
>>> I was having difficulty building a toolchain as of recent. It would
>>> fail at xy. I'll try it again from toolchain to clean to build.
>>>
>> Oops, I meant XZ and the toolchain still fails there on my 64 bit
>> system.
>> Note that I do actually have xz and xz devel installed here.
>>
> You should not need that.
> What guess configure for build?
> That should be on top of log, line 7.

Yes, line 7 indeed as you say:

checking build system type... i686-pc-linux-gnu

--
Dave
http://www.raqcop.com

From: Gilles E. <g....@fr...> - 2011-01-23 13:07:48
----- Original Message -----
From: "David W Studeman" <avi...@ai...>
To: <ipc...@li...>
Sent: Sunday, January 23, 2011 3:29 AM
Subject: Re: [IPCop-devel] ccache-3 upgrade

> David W Studeman wrote:
>
> > David W Studeman wrote:
> >
> >> On 1/22/2011 1:57 PM, Olaf Westrik wrote:
> >>> Since the ccache changes building is considerably slower for me.
> >>> I have not yet run comparing tests, but I estimate the build to take
> >>> twice the time it used to.
> >>> Has anyone else noticed the same?
> >>>
> >>> Olaf
> >>
> >> I'll have to pay closer attention. I know on my usual build host IPCop 2
> >> takes longer than 1.4 but still can spit out a shiny iso in short order.
> >> Are you talking about just cleaning and not a new toolchain each time?
> >> Do you build locally? I just started a build on a QUAD core Phenom with
> >> my build partition as Reiser after cleaning first. Should take an hour
> >> or so. This is a 64 bit version of OpenSuSE.
> >
> > Ok, it took this long by cleaning and using a preexisting toolchain:
> >
> > 51M /home/ipcop/cobaltsvn/ipcop-1.9.19-install-cd.i486.iso
> > ... which took: 73 minutes and 35 seconds
> >
> > I was having difficulty building a toolchain as of recent. It would fail
> > at xy. I'll try it again from toolchain to clean to build.
> >
> Oops, I meant XZ and the toolchain still fails there on my 64 bit system.
> Note that I do actually have xz and xz devel installed here.
>
You should not need that.
What does configure guess for build?
That should be at the top of the log, line 7.

Maybe this change will work:

Index: lfs/xz
===================================================================
--- lfs/xz	(revision 5365)
+++ lfs/xz	(working copy)
@@ -88,8 +88,13 @@
 	@rm -rf $(DIR_APP) && cd $(DIR_SRC) && tar jxf $(DIR_DL)/$(DL_FILE)
 ifeq "$(STAGE)" "toolchain"
-	cd $(DIR_APP) && ./configure --prefix=/$(TOOLS_DIR) --disable-static
+ifeq "$(PASS)" "1"
+	cd $(DIR_APP) && ./configure --build=$(MACHINE_REAL)-linux --prefix=/$(TOOLS_DIR) --disable-static
 endif
+ifeq "$(PASS)" "2"
+	cd $(DIR_APP) && ./configure --build=$(LFS_TGT) --prefix=/$(TOOLS_DIR) --disable-static
+endif
+endif
 ifeq "$(STAGE)" "base"
 	cd $(DIR_APP) && ./configure --prefix=/usr \

But you will need to pass MACHINE_REAL in make.sh toolchain_make; add

MACHINE_REAL="${MACHINE_REAL}" \

Gilles

From: Gilles E. <g....@fr...> - 2011-01-23 09:52:41
----- Original Message -----
From: "Olaf Westrik" <wei...@ip...>
To: <ipc...@li...>
Sent: Saturday, January 22, 2011 10:57 PM
Subject: Re: [IPCop-devel] ccache-3 upgrade

> Since the ccache changes building is considerably slower for me.
> I have not yet run comparing tests, but I estimate the build to take
> twice the time it used to.
> Has anyone else noticed the same?
>
> Olaf

I had the same impression and started to look yesterday.
I think this is at least twice as slow as it was previously, with ccache-1.4.

First, there have been some changes that invalidate the cache, with the
hardening and gcc changes, so you have to rebuild once after all those
changes to feed the cache and another time to get hits from the cache. As I
changed the gcc compilation recently to disable mudflap and openmp (as we
don't use them), it is normal that there are new gcc hash values during the
first compilation, but the next compilations should be faster. In fact, that
does not look to be the case after toolchain.

I started to look at what is wrong. I should say that for now I don't know.

At first glance, looking in preparation.log, the CCACHE_COMPILER_CHECK
signature is the same at each build (and does not even vary from machine to
machine, except the first one, from the host gcc before gcc pass 1 is
compiled), so the issue should not be there.

Looking in the log when compilation ends, _build_ccache shows that the hit
ratio is very low (with roughly 3 times more misses than hits):

cache hit (direct)             525
cache hit (preprocessed)      7606
cache miss                   23158
called for link               3195
multiple source files           22
compile failed                1542
preprocessor error             622
bad compiler arguments         332
unsupported source language    359
autoconf compile/link         9119
unsupported compiler option   5905
no input file                 2388

On the contrary, for the toolchain build the stats look good, with roughly
10 times more hits than misses:

cache hit (direct)            6634
cache hit (preprocessed)       661
cache miss                     497
called for link                838
compile failed                 571
preprocessor error             162
bad compiler arguments          90
unsupported source language     12
autoconf compile/link         2896
unsupported compiler option   1243
no input file                  347

I think I know what happens, as I am working on cleaning up the env creation
in toolchain_make and chroot_make.

In toolchain, we do

make -f $* install list-of-var

In later stages, we do

chroot $LFS env -i list-of-var bash -x -c "cd /usr/src/lfs && make -f $* LFS_BASEDIR=/usr/src install"

The difference between variables defined before or after the make is that in
the 'before' case, sub-makefiles do not inherit those values unless they are
exported in the master script (and we do not use env -i); CCACHE_DIR is
exported in make.sh.

During the toolchain build, CCACHE_DIR currently exists for the lfs files
and for sub-makefiles. During chroot_make, CCACHE_DIR only exists for the
lfs files and not for sub-makefiles. I suppose that with ccache-3 we need to
have CCACHE_DIR available to sub-makefiles.

But when I shift CCACHE_DIR to the 'after make' part, it does not look to
work when checking from another shell with

CCACHE_DIR=./ccache build_i486/tools_i486/usr/bin/ccache -s

I am looking at the log produced when adding

CCACHE_LOGFILE=/usr/src/log_${MACHINE}/ccache.log

to the variables given to chroot_make, to debug what happens after toolchain.

Gilles

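The `env -i` detail is the crux here: it starts the chroot'd bash with an empty environment, so any variable not re-listed on the env command line (CCACHE_DIR included) never reaches the compiler. A minimal demonstration outside the build system (paths are illustrative):

```shell
#!/bin/sh
# env -i wipes the inherited environment; only variables named on the env
# command line survive into the child shell.
export CCACHE_DIR=/tmp/ccache-demo

kept=$(env -i CCACHE_DIR="$CCACHE_DIR" sh -c 'echo "$CCACHE_DIR"')
lost=$(env -i sh -c 'echo "${CCACHE_DIR:-unset}"')

echo "passed explicitly: $kept"
echo "not passed:        $lost"
```

This matches the behavior described above: in the chroot stage only what is in list-of-var exists, exported or not, while in the toolchain stage the exported CCACHE_DIR flows through to sub-makes.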
From: David W S. <avi...@ai...> - 2011-01-23 02:29:41
David W Studeman wrote:
> David W Studeman wrote:
>
>> On 1/22/2011 1:57 PM, Olaf Westrik wrote:
>>> Since the ccache changes building is considerably slower for me.
>>> I have not yet run comparing tests, but I estimate the build to take
>>> twice the time it used to.
>>> Has anyone else noticed the same?
>>>
>>> Olaf
>>
>> I'll have to pay closer attention. I know on my usual build host IPCop 2
>> takes longer than 1.4 but still can spit out a shiny iso in short order.
>> Are you talking about just cleaning and not a new toolchain each time?
>> Do you build locally? I just started a build on a QUAD core Phenom with
>> my build partition as Reiser after cleaning first. Should take an hour
>> or so. This is a 64 bit version of OpenSuSE.
>
> Ok, it took this long by cleaning and using a preexisting toolchain:
>
> 51M /home/ipcop/cobaltsvn/ipcop-1.9.19-install-cd.i486.iso
> ... which took: 73 minutes and 35 seconds
>
> I was having difficulty building a toolchain as of recent. It would fail
> at xy. I'll try it again from toolchain to clean to build.

Oops, I meant XZ and the toolchain still fails there on my 64 bit system.
Note that I do actually have xz and xz devel installed here.

check/crc32_x86.S: Assembler messages:
check/crc32_x86.S:96: Error: suffix or operands invalid for `push'
check/crc32_x86.S:97: Error: suffix or operands invalid for `push'
check/crc32_x86.S:98: Error: suffix or operands invalid for `push'
check/crc32_x86.S:99: Error: suffix or operands invalid for `push'
check/crc32_x86.S:132: Error: relocated field and relocation type differ in signedness
check/crc32_x86.S:265: Error: suffix or operands invalid for `pop'
check/crc32_x86.S:266: Error: suffix or operands invalid for `pop'
check/crc32_x86.S:267: Error: suffix or operands invalid for `pop'
check/crc32_x86.S:268: Error: suffix or operands invalid for `pop'
make[5]: *** [liblzma_la-crc32_x86.lo] Error 1

--
Dave
http://www.raqcop.com

From: David W S. <avi...@ai...> - 2011-01-23 02:20:50
David W Studeman wrote:
> On 1/22/2011 1:57 PM, Olaf Westrik wrote:
>> Since the ccache changes building is considerably slower for me.
>> I have not yet run comparing tests, but I estimate the build to take
>> twice the time it used to.
>> Has anyone else noticed the same?
>>
>> Olaf
>
> I'll have to pay closer attention. I know on my usual build host IPCop 2
> takes longer than 1.4 but still can spit out a shiny iso in short order.
> Are you talking about just cleaning and not a new toolchain each time?
> Do you build locally? I just started a build on a QUAD core Phenom with
> my build partition as Reiser after cleaning first. Should take an hour
> or so. This is a 64 bit version of OpenSuSE.

Ok, it took this long by cleaning and using a preexisting toolchain:

51M /home/ipcop/cobaltsvn/ipcop-1.9.19-install-cd.i486.iso
... which took: 73 minutes and 35 seconds

I was having difficulty building a toolchain as of recent. It would fail
at xy. I'll try it again from toolchain to clean to build.

--
Dave
http://www.raqcop.com

From: David W S. <avi...@ai...> - 2011-01-23 01:00:52
On 1/22/2011 1:57 PM, Olaf Westrik wrote:
> Since the ccache changes building is considerably slower for me.
> I have not yet run comparing tests, but I estimate the build to take
> twice the time it used to.
> Has anyone else noticed the same?
>
> Olaf

I'll have to pay closer attention. I know on my usual build host IPCop 2
takes longer than 1.4 but still can spit out a shiny iso in short order.
Are you talking about just cleaning and not a new toolchain each time? Do
you build locally? I just started a build on a QUAD core Phenom with my
build partition as Reiser after cleaning first. Should take an hour or so.
This is a 64 bit version of OpenSuSE.

One thing I can tell you is that the filesystem used on the building host,
for the partition you build IPCop on, makes a rather dramatic difference.
Reiser 3.7 allows me to build the fastest of the bunch due to the speed of
small file writes, and is fast in general; ccache is nothing for this
filesystem. XFS likes big files and was a bit slow with the standard mount
options for the rapid succession of tiny files created and deleted while
building IPCop, but could be tuned if anyone was forced to use it. I use XFS
for file serving over NFS via raid5 and for media storage on my MythTV box.
EXT3 is a bit slower than I like for building IPCop, but not downright
awful. I hadn't tried with JFS. I had tried EXT4 and it was moderate in
speed.

Further drifting OT: the main draw for ext3, as I understand it, was that it
is backward compatible with ext2 and reliable. It sure isn't the de facto
filesystem for performance. OpenSuSE's switch to EXT3 as default had nothing
to do with the tragic events that took place four years ago; the default had
been Reiser before that, and it was purely a compatibility choice. Of course
the advanced user is going to pick and choose how to format their hard
drive. I see a lot of talk in forums, and a lot of people are led to believe
that EXT3 is always the logical choice.

No matter how fast your hardware is, filesystems actually have a speed
limit.

--
Dave Studeman
http://www.raqcop.com

From: Olaf W. <wei...@ip...> - 2011-01-22 21:58:00
Since the ccache changes, building is considerably slower for me. I have not
yet run comparing tests, but I estimate the build to take twice the time it
used to. Has anyone else noticed the same?

Olaf

From: Olaf W. <wei...@ip...> - 2011-01-22 21:38:43
On 2011-01-22 21:25, Gilles Espinasse wrote:
> I think this is the wrong way to add arbitrary files in the list when we
> know they can't be there with the code we use. FIND_FILES has
>
> -not -path './dev*' -not -path './proc*' -not -path './sys*'

OK.

> At least you should add a comment that you cheat in this rootfile with the
> list of special files.

OK.

> But can't we create them when needed (in mkinitramfs)?

Sure, but I am worried that someday we might change something in the
installation sequence and a script gets executed before mkinitramfs,
breaking our installation. So why not avoid that and put it in the ISO?

Olaf

From: Gilles E. <g....@fr...> - 2011-01-22 20:29:01
----- Original Message -----
From: <ow...@us...>
To: <ipc...@li...>
Sent: Saturday, January 22, 2011 7:44 PM
Subject: [Ipcop-svn] SF.net SVN: ipcop:[5363] ipcop/trunk/config/rootfiles/common/stage2

> Revision: 5363
> http://ipcop.svn.sourceforge.net/ipcop/?rev=5363&view=rev
> Author: owes
> Date: 2011-01-22 18:44:41 +0000 (Sat, 22 Jan 2011)
>
> Log Message:
> -----------
> dev/null, proc and sys are created by init when booting, but they do not
> exist yet when doing the install chroot.
> Readd them to make mkinitramfs etc. happy so we can install again.
> Maybe it is a good idea to remove @PREBUILD from lfs/stage2 to avoid
> removing them in the future.

I think this is the wrong way to add arbitrary files in the list when we
know they can't be there with the code we use. FIND_FILES has

-not -path './dev*' -not -path './proc*' -not -path './sys*'

At least you should add a comment that you cheat in this rootfile with the
list of special files.

But can't we create them when needed (in mkinitramfs)?

Gilles

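The "create them when needed" alternative amounts to a few lines early in the installer. A hypothetical sketch (paths and the plain-file fallback are illustrative, not the actual mkinitramfs code):

```shell
#!/bin/sh
# Ensure the special entries exist inside a chroot target before any
# script runs there. mknod needs root, so fall back to an empty plain
# file for /dev/null when run unprivileged (demo purposes only).
target=/tmp/chroot-demo
mkdir -p "$target/proc" "$target/sys" "$target/dev"
if [ ! -e "$target/dev/null" ]; then
  mknod "$target/dev/null" c 1 3 2>/dev/null || : > "$target/dev/null"
fi
ls "$target"
```

Olaf's counterargument still applies: shipping the entries in the ISO protects against a future reordering where some script runs before this setup step.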
From: Gilles E. <g....@fr...> - 2011-01-19 06:49:22
----- Original Message -----
From: akmal hanif
To: John Edwards; akmal hanif; ipc...@li...
Sent: Wednesday, January 19, 2011 3:43 AM
Subject: Re: [IPCop-devel] /bin/sh: line 1: ./install: Permission denied

...
>>> sorry for my stupid question here, can anybody explain why i've this
>>> error message when im trying to build ipcop-1.4.21 from source
>>> /bin/sh: line 1: ./install: Permission denied
>>> error 126

Here, we only have a one-line error that does not come from make.sh.

...
>> Have you read how to build IPCop 1.4.21?
>> http://www.ipcop.org/development.php
>
> thanks john, for replying my question
> yes i already read, i already as root (sudo, su and su - root) but it is
> still result permission denied..
> in 1.4.18-1.4.20 i can build with other custom addons such as asterisk,
> wget, advanced proxy, and update excelarator.. but in 1.4.21 i have
> problem with permission when build, if i build without any addons i
> success..
> you have another suggestion please,
> by the way thank you john, i realy appreciate

Really explain what command you use (as which user) and what fails. And if
you made changes to the 1.4.21 code, explain what you changed. How could we
guess from your first explanation that it is a modified 1.4.21?

Gilles