the fallacy of the "focus only on using POSIX APIs" crowd is that the majority of useful non-POSIX primitives, like epoll, kqueue, io_uring, etc., were developed to solve real technical problems caused by the POSIX APIs not being scalable
-
@wyatt8740 though if you want to know my opinion on posix, well, i think it’s largely obsolete at this point. open source has mostly replaced the need for centralized standards, since the BSD and Linux communities can just borrow good interfaces from each other in real time.
to talk about kqueue again for example, although the BSDs have not embraced epoll yet as a replacement, FreeBSD has embraced things like eventfd, which will ultimately take them down a path where a BSD epoll implementation shows up as a thin abstraction around kqueue. i give it about 5 years, probably sooner if FreeBSD picks up uring.
POSIX is a creature of another time, a construct created largely to enable UNIX workstation vendors of the 80s to sell their workstations to the US government while keeping PCs out of that lucrative market (as conformance to the standard was a requirement in government IT buying decisions).
from an educational point of view, somebody could and should take the common APIs between modern Linux and BSDs and release that as a book that they could call the “modern Unix-ish programming interface” or something, but Linux APIs are mostly eating the world everywhere else due to their composability. even Windows is adopting uring for example…
-
@ariadne XENIX was literally a thing back then that ran on PCs, so I'm not sure I buy your argument.
-
@ariadne also windows embodies embrace extend extinguish so of course it "adopts" uring.
-
@wyatt8740 in 1988, Xenix was targeting 286s at a time when Sun, HP and SGI had much more capable machines based on Motorola 68k-series chips. The Open Group (the standards consortium which maintains POSIX for the IEEE) was founded by these same computer manufacturers to promote Unix as a response to the rising threat of PCs.
PCs did not really catch up in capability to the unix workstations until the Pentium came out with an FPU that was performance-competitive with the FPUs in the workstations.
ultimately it was a combination of cheap pentium-based hardware and NT with its POSIX compatibility layer (though Linux was a contender there for sure too… BSD wasn’t because USL vs. BSDi was happening which took BSD out of play) that toppled the Unix workstation market.
the commercial interest in POSIX was intended to keep NT and OS/2 out of the government market, but as previously stated Microsoft and IBM wrote compatibility layers, and NT eventually wound up winning there because it supported MLS out of the box, which nobody else did until the NSA did SELinux years later.
-
@wyatt8740 i don’t think that argument makes any sense when it comes to how windows presents its own APIs to developers
-
@ariadne knew all of that except the MLS bit.
But the point is that the government "requirement" for posix compatibility was met by xenix and later the posix subsystem for NT.
-
@wyatt8740 xenix on the hardware available for it at the time was not competitive to what was already on the market.
it took until the 386 to get basic memory management that was competitive with the unix workstations of the time (and it was still crude compared to the 68k mmu), and it took until the pentium for Intel to make an FPU that was competitive.
POSIX wasn’t just keeping x86 PCs out though, there was greater concern about the 32-bit home PCs being competitive. And both Apple and Commodore did try to break into the Unix workstation market with A/UX and Amiga Unix respectively, which were both effectively obsolete when they launched. POSIX was about keeping the goalpost moving.
and then there was also NeXT, and NeXT never particularly cared about POSIX either… instead focusing on the exclusive applications that were available on their platform.
-
@wyatt8740 like i cannot stress enough: Xenix was really slow. even when SCO bought it, rewrote its kernel to use 386 features and turned it into the product which is now known as OpenServer, it was still slow.
performance of Xenix/OpenServer on the same hardware versus 386BSD, BSDi and even Unixware (not owned by SCO until 2000) was at a significant disadvantage.
-
@erincandescent @ariadne “Safe to dlclose” is not a property of the library, it’s a property of the program. Anything that keeps a function or global pointer and doesn’t realise it can become dangling is a potential security vulnerability.
It is possible to write a correct program that uses dlclose. You can use it for Erlang-style hot code replacement, for example, as long as you control everything that gets a pointer to code in the library and have a synchronisation point to replace those pointers. It’s sufficiently hard that you’re probably better off not relying on generic functionality from the loader if you want it.
-
@david_chisnall @ariadne it's a property of both. The library can e.g. spawn a thread, and unless it can hold a reference to itself, it's no longer safe to dlclose (especially since POSIX has no equivalent of FreeLibraryAndExitThread).
But obviously the app calling dlopen can be trusted as much to call dlclose only when it's ready to do so. At least as much as it can be trusted with any open/close pair, anyway.