
I used to assume that Linux had a huge test suite with hundreds of thousands of tests, given the crazy feature matrix that the kernel has to support.

And that they would have started requiring tests for new features and bug fixes.

Alas, that's not the case. There are a few external test suites like the Linux Test Project [1], but nothing that looks very extensive.

The process seems to mostly rely on maintainers giving patches a good look, plus developers and first line users, including hardware manufacturers and corps like IBM or Red Hat, running their own software on kernel pre-releases and reporting the bugs.

I realize that most of Linux is device drivers or code that is tightly coupled to hardware, which is more or less impossible to test without a huge test farm and lots of manual labor, but it's still surprising to me that they don't do it for the more or less device-independent functionality.

This particular bug looks like it might have been caught by a relatively straightforward swap file test case.
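As a purely hypothetical sketch (not an actual kernel test; the path and size are made up, and it needs root), such a swap-file smoke test might look something like this:

```shell
#!/bin/sh
# Hypothetical swap-file smoke test sketch -- requires root, not part of
# any real kernel test suite. Creates a small swap file, enables it, and
# checks that the kernel registered it.
set -eu

SWAPFILE=/tmp/test-swapfile

# Create a 64 MiB file with the permissions swapon expects.
dd if=/dev/zero of="$SWAPFILE" bs=1M count=64 status=none
chmod 600 "$SWAPFILE"

mkswap "$SWAPFILE"    # write the swap signature
swapon "$SWAPFILE"    # hand the file to the kernel

# Confirm the kernel actually activated it.
grep -q "$SWAPFILE" /proc/swaps || { echo "swap not active"; exit 1; }

swapoff "$SWAPFILE"
rm -f "$SWAPFILE"
echo "swap file smoke test passed"
```

A real regression test would go further and apply memory pressure to force pages out to the file; LTP's swapping tests do something along those lines.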

[1] https://github.com/linux-test-project/ltp



Comment from "Helping Out with LTS Kernel Releases"[0]:

"I find it amusing that the answer to "how can I help test" is "that people test the -rc releases when I announce them, and let me know if they work or not for their systems/workloads/tests/whatever". No smoke tests, no automated tools, no suggested workloads, no fuzzing, just "fool around and report back". There probably are lots of kernel testing tools around, but you wouldn't learn that from the linked article."

It seems to be a conscious choice, to the point of developers stating that "the process works". Which, looking at the results, it does seem to. But wouldn't it work better with more tests, easily run by users wanting to help? I mean, if a downstream patch in related code reintroduces the issue, is there a new test added with this fix that would catch that?

Edited to add: it seems that the situation is changing for the better, with gregkh saying in a comment: "`make kselftest` seems to do what you want today, if there are any gaps the kernel developers are glad to take more tests."
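For reference, running the in-tree selftests from a kernel source checkout looks roughly like this (commands as described in the kernel's kselftest documentation; the TARGETS value and install path are just examples):

```shell
# From the top of a kernel source tree, with build prerequisites installed.
# Run the whole kselftest suite:
make kselftest

# Or build and run only selected test collections, e.g. timers and net:
make -C tools/testing/selftests TARGETS="timers net" run_tests

# Install the tests into a directory so they can be copied to and run on
# a separate target machine:
make -C tools/testing/selftests install INSTALL_PATH=/tmp/kselftests
```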

[0] https://news.ycombinator.com/item?id=26021962

[1] https://lwn.net/Articles/848352/


I suspect lots of downstream kernel developers do run smoke tests, automated tools, workloads and fuzzing.

It's just done individually, with each person hitting their own niche, and without a single, visible, all-encompassing CI pipeline.


Several people booting their favourite distro with the new kernel stresses a lot of areas at once.

Yes, it's a manual test, but it "does the job" (or at least 99% of it).


It would do an even better job if the user could easily increase coverage or depth of that stressing and check against known issues (regressions). Maybe `make kselftest` is the way to go. But it might be worth creating an easy way for people to help test the kernel, something like Debian popcon meets CPU-Z benchmarks.


It is important to remember that this is the first release candidate of a series which is usually 7 or 8 entries strong. It's pretty far removed from shipping. This is a testing procedure, though not one which looks the way you're used to. In general, Linux ships pretty reliable releases once the rc process is completed.


To be honest, many codebases that I've worked on with large automated test suites seemed to have approximately the same relative occurrence of bugs as codebases with zero automated tests.


Automated testing is good at catching regressions (especially immediate ones), but it is also a development tool and a communication tool.


This is the checks and balances of many eyes looking at the code and eating your own dog food. These methods succeed when you have a torrent of users, but fail more often at smaller scale.



