• uis@lemm.ee

    This particular issue could be solved in most cases in a monolithic kernel. That it isn’t, is by design.

    It was (see CLONE_DETACHED here) and is (source).

    Create a program that is guaranteed to become a zombie. Run it within a filesystem mounted by an in-kernel module, like a remote NFS mount. You now have a permanently mounted NFS mount point.

    OK, that is not a really good implementation. I'm not sure the standard requires zombie processes to keep mountpoints (unless the executable is located in that fs) until the return value is read. Unless there is a call to get the CWD of another process. Oh, wait. Can't ptrace issue a syscall on behalf of a zombie process, or something like that? Or use the VFS of that process? If so, then it makes sense to keep the mountpoint.
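
    For what it's worth, making such a "guaranteed zombie" is trivial: fork a child that exits immediately and never call wait() in the parent. A minimal sketch (the file name and messages are mine, not from the thread); run it with its working directory inside an NFS mount to test whether the mountpoint really stays pinned:

        /* zombie.c - a minimal sketch of a program guaranteed to
         * leave a zombie child behind. */
        #include <stdio.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(void) {
            pid_t pid = fork();
            if (pid < 0) {
                perror("fork");
                return 1;
            }
            if (pid == 0) {
                /* Child: exit immediately. Until the parent calls
                 * wait(), the kernel keeps it around as a zombie. */
                _exit(0);
            }
            /* Parent: deliberately never reap the child. */
            printf("zombie child: %d\n", (int)pid);
            pause(); /* block forever; kill the parent to clean up */
            return 0;
        }

    While the parent sleeps, ps shows the child in state Z (defunct), and you can probe whether the filesystem it was started in can still be unmounted.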
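
    And on "a call to get the CWD of another process": Linux exposes that as the /proc/<pid>/cwd symlink, which readlink() can resolve. A hedged sketch (mine, under the assumption that /proc is mounted); note that for a zombie this link is typically dead, since the kernel drops the process's fs_struct at exit, which rather argues against the mountpoint-pinning theory:

        /* proccwd.c - a sketch: read another process's CWD via the
         * /proc/<pid>/cwd symlink. For a zombie the fs_struct is
         * already gone, so readlink() fails. */
        #include <limits.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(int argc, char **argv) {
            if (argc != 2) {
                fprintf(stderr, "usage: %s <pid>\n", argv[0]);
                return 1;
            }
            char link[64], buf[PATH_MAX];
            snprintf(link, sizeof link, "/proc/%s/cwd", argv[1]);
            ssize_t n = readlink(link, buf, sizeof buf - 1);
            if (n < 0) {
                perror("readlink"); /* e.g. ENOENT for a zombie */
                return 1;
            }
            buf[n] = '\0';
            printf("cwd of pid %s: %s\n", argv[1], buf);
            return 0;
        }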

    Every module is killable, crashable, upgradable - all without forcing a reboot or affecting any processes not using the module.

    Except without the benefits of actually being a microkernel.

    Except Linux does it too. If the graphics module crashes, I can still SSH into the system. And when I developed a driver for the RK3328 TRNG, it crashed a lot. I replaced it without a reboot.
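
    The unload half of that live module swap is just delete_module(2). A hedged sketch (mine, not from the thread): glibc has no wrapper for it, so it goes through syscall(); it needs CAP_SYS_MODULE, and the module name is whatever you pass in:

        /* unload.c - a sketch of the rmmod half of a live module swap:
         * delete_module(2) has no glibc wrapper, so call it raw.
         * Requires CAP_SYS_MODULE. */
        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        int main(int argc, char **argv) {
            if (argc != 2) {
                fprintf(stderr, "usage: %s <module_name>\n", argv[0]);
                return 1;
            }
            /* O_NONBLOCK: fail instead of blocking if the module is busy. */
            if (syscall(SYS_delete_module, argv[1], O_NONBLOCK) != 0) {
                perror("delete_module");
                return 1;
            }
            printf("%s unloaded; insert the fixed build to finish the swap\n",
                   argv[1]);
            return 0;
        }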

    Microkernels are better. Popularity does not prove superiority, except in the metric of popularity.

    As I said, we live in a post-Meltdown world. Microkernels are MUCH slower.

    • As I said, we live in a post-Meltdown world. Microkernels are MUCH slower.

      I’ve heard this from several people, but you’re the lucky one: by now I’d heard it enough times that I finally bothered to gather some references to refute it.

      First, this argument derives from first-generation microkernels - in particular MINIX, which, as a teaching-aid OS, never tried to play the benchmark game. It’s been repeated like dogma through several iterations of microkernels which have, in the interim, largely erased the performance lead of monolithic kernels. One paper notes that, once the working code exceeds the L2 cache size, there is only a marginal advantage to the monolithic structure. A second paper, running benchmarks on L4Linux vs. Linux, concluded that the microkernel penalty for applications was only about 5-10% relative to the monolithic Linux kernel.

      That is not MUCH slower; indeed, unless you’re doing HPC applications, it’s close enough to be unnoticeable.

      Edit: I was originally going to omit this, as it’s propaganda from a vested interest and includes no concrete numbers, but this blog entry from a product manager at QNX specifically mentions using microkernels in HPC problem spaces, which I thought was interesting, so I’m including it post facto.