

The 2.6.20 cycle begins

Toward the end of the 2.6.19 cycle, there was a brief linux-kernel discussion on whether 2.6.20 should be a bugfix-only release. Just in case anybody thought that might actually happen, the patches merged for 2.6.20 will make the situation clear. There will be a lot of new stuff in the next stable kernel release.

That said, the rate of patches into the kernel has been lower than in some previous cycles. It may be that the workqueue patches have created some conflicts which are slowing things down.

As of this writing, the user-visible changes merged include:

  • New drivers for NetXen 1G/10G Ethernet controllers, Atmel MACB Ethernet modules, Tsi108/9 Ethernet controllers, and Chelsio Ethernet controllers (but without TCP offload support).

  • Numerous serial and parallel ATA driver improvements.

  • SCSI busses can optionally be scanned asynchronously. On large systems with many SCSI peripherals, this can speed the bootstrap process considerably.

  • The set of TCP congestion control algorithms which can be selected by unprivileged processes has been restricted to those which are known to be robust and fair. The system administrator can still select any algorithm supported by Linux.

  • Various improvements have been made to the DCCP code, including SELinux support.

  • Some obsolete, unsupported, and presumably unused capabilities have been removed, including the frame diverter and the floppy tape (ftape) driver.

  • MD5 protection for TCP sessions (RFC 2385) has been added; this capability is normally only used with the BGP routing protocol.

  • The UDP-Lite protocol (RFC 3828) is now supported; see the UDP-Lite page for more information on this protocol, which is oriented toward the needs of streaming multimedia applications.

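The split between available and allowed congestion control algorithms is visible through a pair of sysctl knobs. A quick sketch (the knob names and the example algorithm list are assumptions based on the patch description, so check them against your kernel):

```shell
# List every algorithm compiled into the kernel (assumed knob name):
sysctl net.ipv4.tcp_available_congestion_control

# List the subset an unprivileged process may request with
# setsockopt(TCP_CONGESTION) (assumed knob name):
sysctl net.ipv4.tcp_allowed_congestion_control

# The administrator can widen (or narrow) that subset:
sysctl -w net.ipv4.tcp_allowed_congestion_control="reno cubic"
```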
Changes visible to kernel developers include:

  • The workqueue API changes have been merged, resulting in changes throughout the tree. David Howells has posted a detailed set of instructions on how to fix code broken by these changes.

  • Much of the sysfs-related code has been changed to use struct device in place of struct class_device. The latter structure will eventually go away as the class and device mechanisms are merged.

  • There is a new function:

        int device_move(struct device *dev, struct device *new_parent);
    

    This function will reparent the given device to new_parent, making the requisite sysfs changes and generating a special KOBJ_MOVE event for user space.

  • The networking subsystem has been heavily annotated for automated checking using sparse.

  • A number of kernel header files which included other headers no longer do so. For example, <linux/fs.h> no longer includes <linux/sched.h>. These changes should speed kernel build times by getting rid of a large number of unneeded includes, but might break some out-of-tree modules which do not explicitly include all the headers they need.

The merge window should stay open for another week or so, so there's plenty of time for more stuff to be added. Those who can't wait might want to take a look at Andrew Morton's -mm merge plan posting for some previews of what's coming.



The usual question

Posted Dec 7, 2006 8:31 UTC (Thu) by vblum (guest, #1151) [Link] (2 responses)

I can't resist asking the usual question, that is, what is known (if anything) on the status of Reiser4 this time around?

The usual question

Posted Dec 7, 2006 15:04 UTC (Thu) by jospoortvliet (guest, #33164) [Link]

well, andrew morton wrote he would keep it in -mm... so no progress yet.

same with the adaptive readahead patches, pity imho as they can speed up
reading and copying large files considerably.

and the swap-prefetch patches are nowhere to be seen anymore, i think that
really sucks. -ck users can still have it, of course, but i can't patch my
kernel at work, and i hate the slowness of my machine after a night
running several background tasks... i don't experience this at home -
thanx to swap prefetch. and you know why it doesn't get into mainline?
andrew doesn't think it's useful. he asked for comments from other ppl
saying whether they like/need it, or not - not many comments, apparently.

so we'll have to live with much more sluggish systems than necessary...

The usual question

Posted Dec 14, 2006 13:15 UTC (Thu) by Duncan (guest, #6647) [Link]

I'm guessing reiser4 is basically on hold, pending resolution of the Hans
Reiser legal situation and Namesys' future (or someone else ultimately
deciding it's worth sponsoring, after all, it /is/ GPL code). Too bad, in
some ways, but it has certainly had difficulty as it is, and the current
uncertainty regarding the future of Namesys and reiser-anything isn't
likely to help.

OTOH, from some viewpoints, Hans Reiser was his own worst enemy in trying
to get this merged. It's possible that if he's out of the picture,
depending on what happens to Namesys and assuming there's no permanent
social stigma overriding the technical side, it may actually have better
merge chances with him not involved.

It's all-around unfortunate. As a reiserfs user myself, I certainly hope
he is not guilty, but that he gets his due if he did do it. His kids and
father and the Namesys employees... but the fact of the matter is, however
unfortunate it is for them personally, for me it'd be just another random
headline if I weren't a reiserfs user hoping to eventually go reiser4. I
don't know anyone involved personally and really can't do anything for
them, so selfish tho it may seem, my only connection remains thru reiserfs
and an interest in reiser4, and that being the only connection, that's
what I'm concerned about. I'd still love to see it in the kernel one way
or another, eventually.

Meanwhile, it's encouraging to see that there's no talk of dropping it
from further consideration and therefore from -mm, as some have called
for. Freeze in place in terms of merge while waiting to see how events
develop, and continuing to address minor issues if the opportunity
presents itself, is indeed the most practical reaction at this point.
By .21 or .22, one hopes events have progressed well enough to make some
sense of things and continue with the merge process.

Duncan

The 2.6.20 cycle begins

Posted Dec 7, 2006 10:33 UTC (Thu) by nix (subscriber, #2304) [Link] (7 responses)

The kernel header changes also break glibc builds (in fact, the changes that did this were in 2.6.19) :( It seems that some people have not yet adapted to a world where changes to exported headers are seen by userspace and need some stability.

The 2.6.20 cycle begins

Posted Dec 7, 2006 11:14 UTC (Thu) by khim (subscriber, #9252) [Link] (3 responses)

Nope, they don't. As long as glibc is happy with kernel 2.6.18, it should use kernel headers from 2.6.18.

It's simple, really: if some version of glibc can use features from kernel x.y.z, it should be compatible with headers-x.y.z; if it cannot, then why are you trying to compile it with headers-x.y.z? The syscall interface is set in stone, but headers are not, and one of glibc's tasks is to provide compatibility at the header level for userspace programs...

The 2.6.20 cycle begins

Posted Dec 7, 2006 11:32 UTC (Thu) by ken (subscriber, #625) [Link] (1 responses)

I guess the normal thing to do is take the latest version of both glibc and the kernel; it's not like there exists a table somewhere that lists which versions are compatible.

So even if it would be perfectly OK to use an earlier kernel version, it's hard to know that that is the case.

The 2.6.20 cycle begins

Posted Dec 7, 2006 12:01 UTC (Thu) by filipjoelsson (guest, #2622) [Link]

Then why, oh why, are distributors shipping a certain set of kernel headers as a separate package for a certain version of glibc?

IIRC that is how Linus said it should be done, in order to not cause breakage (around the 2.4 timeframe). A certain release of glibc depends on the headers of a certain kernel - and only package maintainers (and LFSers) need to bother. Is there a distro that still doesn't get this right?

It's not as if it's easy to build glibc against the wrong set of headers - even on gentoo.

The 2.6.20 cycle begins

Posted Dec 8, 2006 0:44 UTC (Fri) by nix (subscriber, #2304) [Link]

Um, in case it had escaped your attention: glibc's headers #include
kernel headers, *and always have*. The kernel headers *do* define a
userspace API and ABI, and simply saying that they do not is denying
reality.

The 2.6.20 cycle begins

Posted Dec 7, 2006 12:06 UTC (Thu) by dlang (guest, #313) [Link] (2 responses)

the kernel headers are not intended to be used by any userspace program, including glibc. so what 'broke' is something that's not supposed to work in the first place.

even gentoo doesn't compile anything against the kernel headers directly; they maintain a separate package containing the headers that userspace needs, extracted from the kernel headers.

unless the kernel changes an interface that's exported to userspace (which would be a break of backwards compatibility), or glibc is trying to take advantage of a new interface that's exported to userspace, there is no reason to change glibc when a new kernel is installed (including no need to recompile it)

if glibc is trying to take advantage of a new interface then it's up to glibc to be able to use the right headers.

David Lang

The 2.6.20 cycle begins

Posted Dec 8, 2006 0:43 UTC (Fri) by nix (subscriber, #2304) [Link] (1 responses)

Sheesh, pay attention to developments in the kernel tree. `make
headers_check' and `make headers_install' have existed since 2.6.19 and are
*explicitly intended* for generating a /usr/include/{linux,asm*,...} tree
to be used to build glibc and other programs that have to communicate
using the APIs which are, like it or not, defined in those headers
(ioctl()s anyone?).

The previous system was not scalable: even major distributors were using
ancient kernel headers because cleaning them up was too much effort for
*anyone*, and everyone else was pretty much completely screwed. People
started cleanup projects and got burned out in months. That is now past,
thank goodness.

As David Woodhouse (I think it was) put it recently, `the days of `throw
it over the wall' are over.'
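(For the curious, the invocation goes roughly like this; the target path is just an example, and INSTALL_HDR_PATH is the variable name as I remember it, so double-check against your tree:)

```shell
# From a 2.6.19+ kernel source tree: sanitize the exported headers
# and install them somewhere a libc build can point at.
make headers_install INSTALL_HDR_PATH=/tmp/kernel-headers

# Then build glibc against that tree, e.g.:
#   ../glibc/configure --with-headers=/tmp/kernel-headers/include ...
```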

The 2.6.20 cycle begins

Posted Dec 8, 2006 13:11 UTC (Fri) by dlang (guest, #313) [Link]

from what I have seen of this, the headers made by make headers_install are not yet expected to be directly usable (after all, this is the first release of a major new feature)

the intention is that eventually this will produce a set of headers that don't need any manual massaging before they can be used, but everyone recognises that they aren't there yet; it's just a starting point for the distros to use when making their kernel headers packages

it's also hard to call the introduction of a feature like this, which has never existed before, a major breakage in the kernel. the new feature may not work yet, but I don't see how it can be called a regression when it never existed before.

The 2.6.20 cycle begins

Posted Dec 8, 2006 12:17 UTC (Fri) by Kluge (subscriber, #2881) [Link] (1 responses)

"There will be a lot of new stuff in the next stable kernel release."

Since there is no longer an "unstable kernel release" and the 2.6.x kernels are focusing more on features than stability, maybe we should just say "mainline kernel release"?

The next stable kernel release

Posted Dec 8, 2006 17:49 UTC (Fri) by giraffedata (guest, #1954) [Link]

I agree. If "next stable kernel release" is supposed to mean 2.6.20, it's quite a misnomer.

I would say the next stable kernel release is 2.6.16.35. The phrase could also describe 2.6.18.6.

The 2.6.20 cycle begins

Posted Dec 8, 2006 14:16 UTC (Fri) by rwmj (guest, #5474) [Link]

Didn't PS3 support go in as well?

Rich.

The 2.6.20 cycle begins

Posted Dec 9, 2006 13:18 UTC (Sat) by arvidma (guest, #6353) [Link] (1 responses)

I'm curious with regard to the bigger long-term plans, and to what extent they exist. Earlier on, when we had the separate unstable (odd-numbered) series, there were all these big projects that churned the entire kernel and ended up making a noticeable difference. Things like the big IDE debacle of 2.3, kernel preemption, zero-copy networking, sensible USB support, journaling filesystems...

I kind of stopped following kernel development during 2.4/2.5, since there wasn't much of interest going on for a regular desktop user (though of course lots of important security fixes, drivers and such).

Are there any big architectural changes on the horizon? Potential performance gains worth mentioning?

The 2.6.20 cycle begins

Posted Dec 14, 2006 12:43 UTC (Thu) by Duncan (guest, #6647) [Link]

Long reply but oh well, it's written now so I might as well post it.

In effect, Andrew's -mm kernel is the development kernel now, with major
changes being tested in it for a while before they hit the Linus mainline.
That was the effect of the discussions at 2004's (??) OLS (Ottawa Linux
Symposium), which made official the relationship between the Andrew and
Linus kernels that had been found to work quite well since the 2.6.0-pre
cycle at least.

Besides formalizing a working relationship that had already been found to
work quite well, the change had a number of other effects, beneficial and
not so beneficial, as well.

By the end of the 2.5 cycle, there had developed a HUGE gap between stable
and unstable kernels, and distributions were filling that gap with their
own (often incompatible with others) backports of 2.5 features to 2.4
stable. What was supposed to be the Linux kernel and therefore basically
interchangeable was ending up creating distribution lockin, as users were
finding it more and more troublesome to switch distributions due to kernel
problems, or even between distributor and
mainline-plus-independent-patches kernels. This was generating complaints
from that end, while the vendors were finding it increasingly expensive to
maintain their ever bloating patch-sets as well -- it wasn't all
deliberate vendor/distribution scheming by FAR.

The kernel devs resolved to change this in the 2.6 process and what came
of it was what we have now, a development -mm and a "stable"
Linus/mainline. Vendors/distributors are STRONGLY encouraged to submit
their patches upstream and get them integrated into mainline as soon as
possible (going thru -mm where the change will be big). Where these are
controversial or there are several solutions to the same problem, LKML
debate and/or turns in the -mm tree for evaluation hashes out the details
before they go into mainline. In any case, there's STRONG encouragement
to get a solution integrated into mainline, thus shrinking the individual
trees and keeping the individual vendor/distribution and developer trees
much closer to in-sync than they were with the traditional odd/even
developer/stable split of yore. That's one of the benefits.

It should also be mentioned that a number of kernel watchers have observed
that the kernel is maturing, and as it matures, there comes a point where
while the rate of change may remain high, the biggest changes don't tend
to be as drastic or destabilizing to the entire tree. When there are big
changes, in a mature tree with decent modularization, they will tend to be
rather more self-contained and affect the tree as a whole somewhat less.
Combine this with an arguably somewhat saner merge strategy, because
they know it's not possible to have the whole thing simply not work for a
huge segment of the base, and the result is a system where improvements get
rolled out to the actual user base far faster, with (arguably) a
relatively low cost in terms of destabilization. Another benefit, of
course related to the first.

On the flip side, some have argued that the process is less stable than it
should be and indeed less stable than it was historically. While
everybody agrees stability isn't perfect, others simply point to
historical big upsets like the memory management changes early in the 2.4
stable series for support of their position that stability isn't really
worse after all. They point to the relatively large changes integrated
into the 2.6 process with rather less disruption as compared to historical
changes in stable such as that, in support for the current model. Both
sides have their valid points, of course.

Meanwhile, there have been refinements in the process as time has
progressed, giving those who want super-stable a number of options, while
preserving the general model, which seems to fit the way the current set
of kernel devs wish to work quite well.

1) It was pointed out early in the process that typically, the types of
users arguing for ultra-stability are the same corporate/enterprise types
that like to purchase support from their vendors and thus have someone to
blame when things go wrong. The enterprise distributions, particularly
Red Hat and Novell, fill this niche rather well, and provide and support
their own kernel packages. Thus, these users don't tend to use
mainline/vanilla kernels anyway, but rather their respective distribution
provided kernels, which typically lag mainline by some period. This
provides its own built-in stability buffer mechanism, and statements of
the kernel devs seem to support the idea that those wishing ultra-stable
kernels should be using vendor provided kernels, not vanilla-mainline.
The effect is that there's yet another level of stability, the various
individual developer trees (call them alpha level), the development -mm
tree (call it beta), the stable 2.6.x releases, and the ultra-stable
releases further tested and provided by various vendors.

2) That didn't prove to be enough for some, particularly as it encouraged
specific vendor dependence and a degree of lockin that some of these same
enterprise customers weren't entirely comfortable with, once again. It
also quickly became apparent that the model didn't particularly well deal
with security updates between the 2-3 month stable releases, and as
concern for security extends well beyond the enterprise community,
something else was needed.

That "something else" was the 2.6.x.y fourth field point releases,
designed to be quite conservative in the patches they accepted, and
originally designed to continue not long after the release of the next
stable regular mainline kernel.

3) The latest development has been that of the 2.6.16.x series, which now
has a maintainer (Adrian Bunk) continuing it long past the original stable
team cutoff date, as they moved forward to 2.6.17 and now 2.6.18 and .19.
As he had an interest in it and volunteered, he's maintaining continuing
2.6.16.x releases for some time, at this point basically indefinitely.

Thus, 2.6.16 has become in practice the new "stable" series, API frozen,
with only security patches and the like, with certain new drivers and the
like backported as well. Those who were formerly sticking to 2.4
primarily because they weren't comfortable with the stability (or lack
thereof) in 2.6 now have a 2.6 upgrade target.

As time goes on, it's reasonably likely another continuing 2.6.x.y will
find support. Given 4-5 third-field stable releases a year, and assuming
the rate of change continues much as it has, a continued-stable release
every couple of years or roughly one every ten third-field releases, might
be considered reasonable, provided of course trusted volunteers continue
to step up to take them. Of course, that's the future and reality may yet
change out from under that sort of prediction.

Meanwhile, the original plan was that should something really radical come
along that couldn't fit in this development model, a relatively short and
specific 2.7 may eventually come to be. There's been no real pressure for
it yet, as the current model with -mm being the development branch and
keeping things otherwise much closer integrated than was the case
traditionally, has seemed to work quite well even for larger changes.
(Again, that could be partially attributable to better modularization now
than in the past.) The possibility therefore seems pretty remote for the
time being. Still, the option is there, if it's ever found to be needed.

Of course, who knows, "life happens" as they say, and reality may intrude
on the current comfortable arrangement. If some tragedy befalls Andrew or
Linus, or they simply decide they have personal disagreements or
something, or if the DRM loophole becomes big enough to BSD Linux while
the Solaris kernel adopts GPLv3 and the FSF adopts it in place of
GNU/HURD, or... well, then things will get "interesting" for a while.
Nevertheless, the community has proven extremely resilient in the past,
the xfree86/xorg and gcc/egcs changes and the SCO threat being just three
examples. We'll certainly come out even stronger for it, as with xorg, no
matter the name of what happens to be the most popular kernel in the
community.

Duncan


Copyright © 2006, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds