
BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 7, 2009 13:37 UTC (Mon) by mingo (subscriber, #31122)
In reply to: BFS vs. mainline scheduler benchmarks and measurements by kragil
Parent article: BFS vs. mainline scheduler benchmarks and measurements

What I take from this discussion is that kernel devs live in a world where Intel's fastest chips in multi-socket systems are low end, and they will cater only to the enterprise bullcrap that pays their bills.

I certainly don't live in such a world, and I use a bog-standard dual-core system as my main desktop. I also have an 833 MHz Pentium III laptop that I booted into a new kernel 4 times today alone:

  #0, d5f8b495, Mon_Sep__7_08_39_36_CEST_2009: 0 kernels/hour
  #1, b9e808ca, Mon_Sep__7_09_19_47_CEST_2009: 1 kernels/hour
  #2, b9e808ca, Mon_Sep__7_10_26_28_CEST_2009: 1 kernels/hour
  #3, b9e808ca, Mon_Sep__7_14_58_48_CEST_2009: 0 kernels/hour

  $ head /proc/cpuinfo 
  processor	: 0
  vendor_id	: GenuineIntel
  cpu family	: 6
  model		: 8
  model name	: Pentium III (Coppermine)
  stepping	: 10
  cpu MHz	: 846.242
  cache size	: 256 KB

  $ uname -a
  Linux m 2.6.31-rc9-tip-01360-gb9e808c-dirty #1178 SMP Mon Sep 7 22:38:18 CEST 2009 i686 i686 i386 GNU/Linux

And that test system does this every day - today isn't a special day. Look at the build count: #1178. This means that I have booted more than a thousand development kernels on this system already.

Now, to reply to your suggestion: for scheduler performance I picked the 8-core system because that's where I do scheduler tests: it allows me to characterise that system _and_ also lets me characterise lower-performance systems to a fair degree.

Check out the updated JPGs with quad-core results.

See how similar the single-socket quad results are to the 8-core results I posted initially? People who do scheduler development use this trick frequently: most of the "obvious" results can be downscaled as a ballpark figure.

(The reason for that is very fundamental: you don't see new scheduler limitations pop up as you go down in core count. The larger system already includes all the limitations the scheduler has on 4, 2 or 1 cores, and already reflects those properties, so there are no surprises. Plus, testing is a lot faster: it took me 8 hours today to get all the results from the quad system. And this is right before the 2.6.32 merge window opens, when Linux maintainers like me are very busy.)
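The downscaling approach described above can be sketched with standard tools. This is a minimal, hypothetical example assuming `taskset` (from util-linux) and `perf bench sched messaging` (the scheduler stress benchmark shipped with the kernel's perf tool) are available; the loop only prints the commands it would run, so it is safe anywhere - drop the `echo` to actually execute them.

```shell
#!/bin/sh
# Sketch: run the same scheduler-heavy benchmark pinned to ever-smaller
# CPU subsets, to see how results scale down from 8 cores to 1.
# "perf bench sched messaging" simulates message-passing load on the
# scheduler; taskset restricts it to the given CPUs. The commands are
# echoed rather than executed.
for cpus in 0-7 0-3 0-1 0; do
    echo taskset -c "$cpus" perf bench sched messaging -g 20
done
```

Comparing the four result sets side by side is what makes the "downscaled ballpark" argument checkable on a single large box.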

Certainly there are borderline graphs, and also trickier cases that cannot be downscaled like that; and in general 'interactivity', i.e. all things latency-related, comes out in a more pronounced way on smaller systems.
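For the latency side, a hedged sketch: the `perf sched` subcommand (merged around this kernel's timeframe) can record scheduler events and summarize per-task scheduling latencies. As above, the commands are only printed, not executed, so the sketch runs anywhere.

```shell
#!/bin/sh
# Sketch: record scheduler events for 10 seconds under whatever workload
# is running, then print per-task latency statistics. Echoed only;
# remove the echo (and run as root) to do it for real.
echo perf sched record -- sleep 10
echo perf sched latency
```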

But when it comes to scheduler design and merge decisions that will trickle down and affect users 1-2 years down the line (once the code goes upstream, once distros ship the new kernels, once users install the new distros, etc.), I have to "look ahead" quite a bit (1-2 years) in terms of the hardware spectrum.

Btw., that's why the Linux scheduler performs so well on quad-core systems today - the groundwork for that was laid two years ago, when scheduler developers were testing on quads. If we discovered fundamental problems on quads _today_, it would be way too late to help Linux users.

Hope this explains why kernel devs are sometimes seen to be ahead of the hardware curve. It's really essential, and it does not mean we are detached from reality.

In any case - if you see any interactivity problems on any class of system, please do report them to lkml and help us fix them.



BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 8, 2009 8:46 UTC (Tue) by kragil (guest, #34373)

Reading all your answers calmed me down a bit :) Thanks

I think our major disagreement here is the "look ahead".

I strongly believe that computers have reached the point where this relentless upgrade cycle should stop and has stopped. If you bought a P4 with HT and 1 GB of RAM in 2003, it is still perfectly capable of running the newest software that 95% of desktop users need. Machines like that will soon turn 7 YEARS old. People will look for computers that use less energy and don't have moving parts that just break after a few years.
PCs will be like old TV sets and work for many, many years (10 to 15 years). The software has to adapt. That is the "look ahead" I see, but I can understand why Red Hat plans for something different.

I think faster ARM, MIPS and Atom CPUs are the architectures most desktop Linux kernels will run on, and the relative percentage of X-core x86 monsters will decline (maybe even rapidly).

And no, I don't think Fedora's Smolt data is any good here. Fedora users are technical people and are unlikely to run really old hardware like my sister's, for example.

I also don't think Linux will ever have problems with the fastest computers; its dominance in the HPC area will make sure of that.

BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 8, 2009 9:30 UTC (Tue) by mingo (subscriber, #31122)

And no, I don't think Fedora's Smolt data is any good here. Fedora users are technical people and are unlikely to run really old hardware like my sister's, for example.

That's all fine, and I too have a Fedora Core 6 box on old hardware - very old hardware.

I wouldn't upgrade the kernel on it though - and non-technical users are even less likely to do that. Software and hardware form a single unit, and for the same reasons that it is hard to upgrade hardware, it is difficult to upgrade software as well. Yes, you pick up security fixes, etc. - but otherwise main components like the kernel tend to be cast in stone at install time. (And no, if you are reading this on LWN.net then your box probably does not qualify ;-)

Which means that most of the 4-year-old systems have a 4-year-old distribution on them, with a 4-year-old kernel. That kernel was developed 5 years ago, and any deep scheduler decisions were made 6 years ago or even earlier.

So yes, I agree that the upgrade treadmill has to stop eventually, but _I_ cannot make it stop - I just observe reality and adapt to it. I see what users do, I see what vendors do, and I try to develop the kernel in the best possible technical way, matching those externalities.

What I'm seeing right now as the scheduler and x86 co-maintainer is that the hardware side shows no signs of slowing down, and that users who are willing to install new kernels show eagerness to buy shiny new hardware. Quads yesterday, six-cores today, octo-cores in a year or two.

Most new kernel installs go onto fresh new systems, so that's an important focus of the upstream kernel - and of any distribution maker. That is the space where we _can_ realistically do something, and if we did something else we'd be ignoring our users.

I could certainly be wrong about all that in some subtle (or not so subtle) way - but right now the fact is that most of the bug reports I get against the development code we release are filed on relatively new hardware.

That is natural to a certain degree - new hardware triggers new, previously unknown limitations and bottlenecks, and new hardware has its own problems too, which get mixed into kernel problems, etc. Old hardware has also already settled into its workload, so there's little reason to upgrade an old, working box in general. There's also the built-in human excitement factor that shiny new hardware triggers on a genetic level ;-)

There's an easy way out though: please report bugs on old hardware and make old hardware count. The mainline kernel can only recognize and consider people who are willing to engage. The upstream kernel process is a fundamentally self-tuning and self-correcting mechanism, and it is mainly influenced by people willing to improve the code.

BFS vs. mainline scheduler benchmarks and measurements

Posted Sep 9, 2009 11:41 UTC (Wed) by nix (subscriber, #2304)

Well, I'm a counterexample: I upgrade my hardware every decade, if that, but my kernels are normally as new as possible, because I'd like newish software, thanks, and that often requires new kernels. Further, everyone I know who isn't made of money and runs Linux does the same thing: they tend to run Fedora, recentish Ubuntu, or Debian testing, because non-enterprise users generally do not want to run enterprise distros, because all the software on them is ancient, and non-enterprise distro kernels *do* get upgraded.

I suspect your argument is pretty much only true for corporate uses of Linux (i.e. 'just work with *this* set of software', as opposed to other uses which often involve installation of new stuff). But perhaps those are the only uses that matter to you...


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds