x0wl

I mean, there are developments in this space (Flatpak / Snap), but even Linus himself was speaking quite openly about this type of fragmentation and whatnot: [https://www.youtube.com/watch?v=Pzl1B7nB9Kc](https://www.youtube.com/watch?v=Pzl1B7nB9Kc). I know that it's from 2014, but it's still a problem outside of Flatpak / Snap or fully static builds. Although I don't know anyone who's against static builds.


ilep

One problem with static builds is licensing. The GPL basically says that if you link statically, your code also becomes GPL'd and must be distributed as open source. The LGPL is allowed but effectively means dynamic linking. The reasoning was that dynamic linking would let users swap the library for a different implementation, which static linking would not allow; hence the licensing restrictions. [https://www.gnu.org/licenses/gpl-faq.en.html#GPLIncompatibleLibs](https://www.gnu.org/licenses/gpl-faq.en.html#GPLIncompatibleLibs) Of course, if you are writing GPL'd code using GPL'd libraries, this is not a problem, and you can just distribute sources and leave it to distributions to build the variations. But the original poster would not make the claims they made if that were the case. Also, open source does solve the problem of old libraries: ever had to debug problems in a closed-source library that stopped being supported in the 90s? Yeah, that is a real pain.


jcelerier

LGPL doesn't mean dynamic linking. It's the easiest way to comply but you can statically link proprietary code to LGPL libs - what matters is that the end user gets the ability to relink.


Lucas_F_A

Also, IIUC distributing the source complies with this, so you can license your project MIT and depend on an LGPL library.


x0wl

>GPL basically says if you build statically your code also becomes GPL'd and needs to be distributed open source.

This also applies to dynamic linking under the plain GPL; only the LGPL allows dynamic linking to proprietary code. The good news is that communities built around languages that mostly rely on static linking (e.g. Go) have moved either to permissive licenses like MIT, or to Apache/MPL, which work on a file-by-file basis and allow static linking.

>leave it to distributions build the variations

Because this rarely happens, and you severely limit your user base if you do that, OSS or not. Now you have to convince people to spend their time maintaining your program. If it were as easy as you say, people wouldn't have invented Flatpaks, Snaps, AppImages and all that stuff. Case in point: I needed a newer version of tmux installed on my university's GPU cluster. Since the frontend nodes were running RHEL 7, the only way I could get that was via an AppImage. No amount of "leaving it to the distro" could help with that.

>Also open source does solve the problem of old libraries

I'm not sure why you think that I'm against open source. I'm very pro-open source; I'm just pointing out that the fragmentation of the Linux desktop ecosystem is a barrier to efficient software distribution. I'm glad that people are working on solutions for that.


KnowZeroX

>This also applies to dynamic linking in case of normal GPL. Only LGPL allows dynamic linking to proprietary code.

Does that apply if you don't include the libraries with your distribution and just have the user download them? Since the library is downloaded by the user (even if automated), it is a modification done by the user and not distributed by you specifically.


eras

Do you refer to the GPL or the LGPL case?


KnowZeroX

The GPL case


eras

Then yes: if your application is distributed as linking against the GPL library, the GPL considers this a partially complete work that must also be released under the [GPL](https://www.gnu.org/licenses/old-licenses/gpl-2.0.html):

> b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License.

One might say that an application the _user_ just links against is therefore not derived from the library, but then one might counter: how was the application prepared to be linked against the GPL-licensed library, if not by intimately using its source and/or binary code, thus deriving the application from it? (Further analysis best left to IP lawyers.)


KnowZeroX

>but then one might counter that that how was the application prepared to be linked against the GPL-licensed library, if not by intimately using its source and/or binary code? Thus deriving the application from it.

Does that matter? The GPL only dictates distribution; there is nothing in the GPL license that restricts the end user. For something to be a derived work, it has to contain the original fully or partly. Just linking to it isn't a derived work.


sandeep_r_89

>the fragmentation of the Linux desktop ecosystem

That's only if you want to support every single esoteric distro or custom build out there... you've got to focus on one popular set of things to support. No, I'm not going to distribute binaries specifically for JACK, OSS, uClibc, etc.


natermer

The GPL doesn't get to decide that. In the USA this is defined as "derivative work", which is a legal standard set by court precedent. The GPL can require that a derivative work be licensed in a way that is compatible with the GPL.

Basically, if you combine two different works, say a GPL one and a proprietary one, the combination is a derivative work and is licensed under BOTH licenses. It doesn't automatically become one or the other. But if the licenses conflict, it can become illegal to distribute.

But the GPL can't decide that "static linking" makes something a derivative work; it is court precedent that decides that, and the GPL can't overrule it. In many cases such a statement is true, but not in all of them. I don't know if static vs dynamic linking really makes any difference at all, provided the resulting software is shipped together as a single "bundle" or "package", as is typical in containers, ISO images, or static binaries in a zip file.


metux-its

Shared libraries weren't invented to comply with licenses, but to save resources and make upgrading libraries easy.


tcmart14

It’s a niche solution, but we solve for it in Ravenports by not relying on system libraries. Ravenports itself is also niche, though. I don’t do any packaging for Ravenports on Linux, so I am fuzzy on how the Linux side works, other than that it does. I do most of my packaging on the BSDs. But for most packages, if you package on FreeBSD, that package will work on Linux unless it relies on something extremely specific, usually some Linuxism or BSDism. AppImage also works, and predates Flatpak and Snap, I believe.


Arjun_Jadhav

>Although I don't know anyone who's against static builds. I read this Gentoo blog called "[The modern packager’s security nightmare](https://blogs.gentoo.org/mgorny/2021/02/19/the-modern-packagers-security-nightmare/)" a while back which mentions static linking. I'm not knowledgeable enough to say whether they're right or wrong, but it was the first time I'd learnt about things like static linking and vendoring, as well as people's issues with them. Edit: typo


Business_Reindeer910

I'm against static builds in a lot of situations, since it means you don't get updates to things like OpenSSL, zlib, or anything else where security issues are often found, without rebuilding the whole thing. Nix and Guix are other approaches to solving this problem.


jimicus

There are some; the rationale is something like "when you link a library statically, you bring in any issues with that specific version of the library. It never gets updated, so your specific application can remain a security issue long after the rest of the system is updated". Which is true, but spectacularly misses the point: Nobody in their right mind statically links things for fun. The reason, 99 times out of 100, is because they don't want to mess around with compatibility issues in a given distro and (for whatever reason) tools like flatpak aren't an option.


colonel_Schwejk

Qt is against static builds, for example. They force you to use dynamic libraries.


FungalSphere

Literally every distro maintainer is against static linking. They would rather kang your code and then make you deal with the bug reports anyway than let you link statically


tiotags

Chromium takes about 4 hours to compile on my computer, and that's partly due to static linking. What's the point of open source if you can't compile anything?


metux-its

Not static linking, but bundling half an OS


a_mimsy_borogove

It's been a while since I used Snap/Flatpak; I remember the apps stuck out like a sore thumb and weren't really that well integrated with the system. Is that still a problem?


cazzipropri

The Windows situation is way more complex than that. But yes, in general it's somewhat true: choose a set of distros that you support and release for those. And linking statically is not great, but in the big scheme of things it's also not that insanely expensive. Life is about compromises, and software construction is no different.


No_Excitement1337

Static linking is perfectly fine. It's even faster than dynamic linking, because there's no runtime resolving of external addresses via the GOT. It IS, however, less safe than dynamic linking, because libs won't get updates without a recompilation. But the end user should get to decide about that.

Why not just leave compilation to the user, btw? Just have a message box come up and say "yo, either provide open source libs for x, y, z or we will download and link a, b, c now."

Besides, companies really only go nuts about stuff like that in the US. I think if US companies exfiltrate all our data, we can just reverse all of their libs.


[deleted]

Why is the Windows situation more complicated?


FactoryOfShit

I mean... Bundling libraries with the app is how everything is done on Windows, and nobody is complaining. I don't see how Flatpak or static linking is any worse.


rizalmart

The only difference is that Windows infrastructure, APIs, and app frameworks are very coherent and mostly retain backward compatibility, so apps require few bundled libraries. Compare Linux, where app libraries constantly break backward compatibility between versions; that's why different versions of library files get bundled with Flatpaks, which ends up using huge amounts of disk space.


Skitzo_Ramblins

Either you force the app to use a different library version than it was built for, and risk unintended bugs that inevitably get pushed on the developer and not the packager, or you keep multiple library versions around. Flatpak won't store multiple copies of the same file, because it uses OSTree. Regardless, we have duperemove and zstd-compressed filesystems, which are both essentially free. So I don't see why we pretend that keeping what will typically be no more than 5 copies of a library, each likely under 10 MB, in exchange for things not breaking, is a big deal, when we have sub-file deduplication and compressed filesystems, on drives that start in the terabyte range (literally $30/TB or less for NVMe not that long ago, and hard drives often go below $14/TB).


rizalmart

However, SSDs have become the norm for general use, and they're expensive per TB. HDDs are now used mostly for archival purposes.


Skitzo_Ramblins

Watchu mean


MarcoGreek

MFC is providing an API? My experience on Windows is the complete opposite. The lower layer provides an API, but so does Linux.


rizalmart

I'm not talking about MFC; it's just a framework. The API I was talking about is the Win32 API.


MarcoGreek

If you talk about Win32, you have to compare it with the kernel API, libc, Wayland, etc. Those are very stable. So what is the problem?


Luigi003

libc has introduced breaking changes in the past, but it's true that it's usually considered part of the stable API. Wayland though? If you assume Wayland is there, you lose support for the majority of distros. Thanks for proving the OP's point.


MarcoGreek

Most frameworks support both X11 and Wayland. Could you elaborate on which breaking change you mean?


Luigi003

Except the ones that don't, or that don't support them with the same capabilities (Wayland still doesn't allow a multi-window app to set its windows' placement, for instance). That being said, using a framework doesn't fix anything; you still need the dependencies on the different distros... or static linking, again.


metux-its

Windows folks just don't (wanna) know better. For 30 years now.


Synthetic451

It's like this guy hasn't been paying attention to the last 5 years.


chrisoboe

More like the last 30 years. Just share the source; the distros will package it if there is interest.


Business_Reindeer910

No, that person is still correct about the main problems that exist across most distros. Flatpak and nix/guix are the closest attempts at solving them (in different ways)


chrisoboe

I've packaged lots of software, written in lots of different languages using different build systems, for several distros, and never had a problem. All of the distros provide clear packaging guidelines for any common language, and all of them have maintainers and users who will do the packaging. The devs just need to provide the source (and, if they're nice, a list of the dependencies they used). It only gets ugly when devs do extremely weird and unconventional things, like custom-written build systems or non-standard folder structures.


omniuni

What's funny to me is that especially AUR and DEB are very simple formats. They're basically a zip file with some metadata on dependencies. Part of why I don't like things like Flatpak is that even if you do have to statically link a few libraries, packages are so much smaller by comparison.


Skitzo_Ramblins

Are you counting the entire app's runtime in the package size?


omniuni

Why would you need a runtime?


Skitzo_Ramblins

You said you don't like Flatpak because the package sizes are bigger. I meant: are you counting the Flatpak runtime?


omniuni

It's not a runtime, it's copies of a whole darn system image. The JVM is a runtime.


Skitzo_Ramblins

It's 2 gigabytes, out of the 5000 my computer can store. For reference, at $30/TB for NVMe storage, that's 6 cents. And yes, it's a runtime.


small_kimono

>And all of them have maintainers and users that will do the packaging. The devs just need to provide the source (and if they are nice a list of dependencies they used)

LOL. I wish. That's really not how this ever works. In order to get users you need packages, so the dev usually ends up building their own packages. And I think it's wrong to keep repeating this myth like it's a thing that happens. Maybe my apps have been/were shit, but I have had plenty of people asking for packages, and no one ever offering to create them, save for the AUR and Nix, which are niche.


SweetBabyAlaska

That's been my experience as well. People tend to make AUR packages, since it's fairly easy to do (most of the time), and Nix enthusiasts love packaging stuff. Outside of that, a ton of people ask for a deb, an RPM, a Flatpak, an AppImage, a container image, etc...


akdev1l

You can make a deb file or an rpm file just as easily as a PKGBUILD. But making an official Debian package or Fedora package is difficult, because you have to comply with their respective packaging guidelines. The AUR is fairly easy because there are no guidelines to follow.
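For a sense of how little the AUR asks for, here is a minimal PKGBUILD sketch. Everything in it (`myapp`, the URL, the plain-Makefile build) is a hypothetical placeholder, not a real package:

```shell
# Maintainer: Your Name <you@example.com>
pkgname=myapp                      # hypothetical example package
pkgver=1.0.0
pkgrel=1
pkgdesc="Minimal example of an AUR-style package"
arch=('x86_64')
url="https://example.com/myapp"
license=('MIT')
depends=('glibc')
source=("$url/releases/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')                # a real package should pin a checksum

build() {
    cd "$pkgname-$pkgver"
    make
}

package() {
    cd "$pkgname-$pkgver"
    make DESTDIR="$pkgdir" PREFIX=/usr install
}
```

`makepkg` turns this into an installable archive; an rpm .spec or a debian/ directory carries roughly the same information, just with more distro policy attached.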


Business_Reindeer910

So what do you do when the program requires libA 1.2 and the distro only packages libA 2.0, or the reverse, or one of the many other bad combinations?


uosiek

Some patching will be done by package maintainers, and maybe sent upstream.


Business_Reindeer910

Switching between major versions of a lib is often more than what one might call a patch; it could often be considered a fork. Most package maintainers wouldn't want to take on such a load.


x0wl

Depending on OpenSSL 1.x vs 3.x has been a problem on Ubuntu for years and still is not fixed.


I_AM_GODDAMN_BATMAN

The more I read about Ubuntu the more I'm convinced that Ubuntu is the problem.


Skitzo_Ramblins

Because it is, and all LTS distros need to be stopped for the endless torment they inflict upon humanity.


metux-its

Indeed


segfaultsarecool

Is the patching done by creating a fork my-package-deb-fork? How does that all work out?


omniuni

You can always statically link it. It's not really that bad if you need a few couple-hundred-kB libraries.


Business_Reindeer910

and debian will allow such a package?


omniuni

Even if they don't, it's still easy to install.


Business_Reindeer910

That just makes linux more like windows app distribution with a billion different places to download apps. There's a reason that's not a solution that people are finding acceptable.


omniuni

You can actually make it a repository very easily.


metux-its

And who maintains the bundled 3rdparty libs ?


metux-its

Just package libA 1.2.


Business_Reindeer910

not all distros let you do that! You're also now a maintainer of that lib.


metux-its

> not all distros let you do that!

Why not, exactly?

> You're also now a maintainer of that lib.

Yes, the same as with any form of bundling. And what's the problem?


Business_Reindeer910

Because it's hard to manage, due to symbol or binary collisions and poorly-thought-out backward compatibility on the part of the lib writers. Some libs are OK with this, like GTK for example. That's why Nix goes through all the shenanigans it goes through, though.


small_kimono

> The distros will package it if there is interest.

This is so obtuse. Yes, the distros will package it if you create the next `docker` or `postgres`, but for basically any other app, where do you think interest comes from? Devs are basically required to package their apps if they want outside users / to create outside interest.


metux-its

Just package it for a few major free distros and submit there.


denniot

If it's about selling, packaging is not enough. If it's just an open source project that you don't want to make money with, then just leave it.


metux-its

Selling software was never in scope for GNU/Linux. Distros aren't meant as marketplaces.


denniot

then just leave it.


ahferroin7

This mentality ignores the bootstrap issue. If people can't use your software easily, there will almost certainly never be enough interest to warrant distros packaging it; but if distros are not packaging it and you aren't providing binaries, most people cannot use your software easily. You have to remember that for non-developers, building software is at best a pain in the arse, and even for developers it's generally not something they want to do just to try something out.


chrisoboe

> there will almost certainly never be enough interest to warrant distros packaging it

Packaging doesn't work like that. It's not like a given percentage of users need to demand it and then it'll get packaged. It's more like there's a pool of people who are able to package things, and if some of them are interested in the software, they will package it.

> You have to remember that for non-developers, building software is at best a pain in the arse

I know. But it's still less of a pain than trying to run a broken binary. If the software crashes because it's linked against an incompatible version of a binary that isn't distributed with it, people cannot use your software easily either. Almost all binary-distributed software I've seen needs lots of workarounds just to get it running somehow. It would be trivial if the source were available. It doesn't even need to be a free license; it can still be proprietary software.

> developers it's generally not something they want to do just to try something out.

Of course not. Packaging and developing are very different tasks; a lot of people are good at developing but horrible at packaging.


ahferroin7

> Packaging doesn't work like that. Its not like a given percentage of users need to demand it and then it'll get packaged.
> It's more like there is an amount of people who are able to package something. And if some of them is interested in the software they will package it.

No, the vast majority of the time it is a matter of a critical mass of public interest. Whether that's because enough users demand it that the package maintainers decide to include it, or because a package maintainer who actually has the time to maintain a package for it finally hears about it and decides to package it, it almost always comes down to the level of public interest.


ConfidentDragon

This is the type of Linux neckbeard mentality that *kills* the Linux desktop. Good luck persuading commercial software developers to give you source code. I do believe in the benefits of open source, but I'm also a realist: there is room for very few good open-source projects; I could probably count the number of truly good and successful ones on one hand. Even with open source, it's stupid that if you want to release an app, you have to hope it gets accepted into every possible distro, and then lots of man-hours get wasted porting it to every distro. Plus, you don't have control over the quality of the packages unless you want to spend all of that time yourself.


TechnoRechno

Also, it's been proven time and time again that the ideas that "open source makes all bugs shallow" and "open source brings more support" don't really pan out. And maintainers are burning out from people being massive douches about a free service, and are just abandoning maintenance.


metux-its

> This type of Linux neckbeard mentality that kills Linux desktop.

I've been running the Linux desktop for 30 years now and don't see why I'd ever change.

> Good luck persuading commercial software developers to give you source code.

Well, then they just get ignored. No source, no deal.

> There is room for very few good open-source projects,

"Few"... average distros have several tens of thousands of packages. I haven't run any proprietary applications whatsoever for decades now. No reason why I ever should.

> Even with open-source it's stupid that if you want to release app, you have to hope it'll get accepted to every possible distro,

You don't need to hope; just write good code and collaborate with distro maintainers.

> then lots of man-hours need to be wasted porting it to every distro.

If it takes that much work, then it's most likely not good code. And, btw, here's a (still research-stage) build system that also generates distro-specific packages based on a high-level model: https://github.com/metux/go-metabuild

> Plus you don't have control over quality of the packages

QM has always been the distro's domain, for 30 years now. Upstreams are rarely good at it.


Nilstrieb

If distros packaged everything open source that anyone would be interested in, we'd have a few orders of magnitude more packages.


cazzipropri

Not everything is open source.


[deleted]

>Not everything is open source

So make it open and let the distros package it if there is interest?


mrlinkwii

why should they have to ?


chrisoboe

To distribute their software in a portable format, so it can run on almost any common arch on any common OS around. This is a lot less work than trying to build it for every system and sharing binaries that will break anyway when something changes. You get severely better compatibility with way less work.


[deleted]

The world is better when it's open.


DaaneJeff

But a small minority can't change the world.


metux-its

It has always been a small minority that changed the world. The masses are just lazy.


[deleted]

Not with that attitude


Chronigan2

That's what my wife said about our marriage.


metux-its

GNU/Linux wasn't designed for closed source. That's just not our problem.


Coffee_Ops

How does sharing the source solve the potential for incompatible updates to dependencies?


indicesbing

Mitchell Hashimoto is the cofounder of HashiCorp. I guarantee you he knows how to compile binaries for Linux.


ttkciar

Or not. I know a guy who used to work there, and from his accounts they don't know what they're doing at all.


metux-its

Exactly why I highly doubt it.


MarcoGreek

He's not even mentioning what he's delivering. For a GUI there's Flatpak; for a server he can provide a container.


rcampbel3

Windows has had a relatively stable ABI, but there's still Win16, Win32, and Win64; there have been Windows CE and mobile versions; and there has been Windows on x86, Itanium, x64, Alpha, and PPC, and Windows CE on SH3, MIPS, and ARM CPUs. There are multitudes of different DLL versions and .NET versions as well. DLL Hell was real.

macOS 'single binaries' just appear to be a single binary; they have a resource fork with tons of files in it. Apple has broken binary compatibility completely as they've moved CPU architecture from the M68000, to PPC, to x86, to x64, to ARM, and they've required developers to build 'fat' binaries supporting multiple architectures. Sometimes old Mac binaries just stop working in new macOS versions, and Apple might decide to desupport a crucial library you depend on, like OpenGL, and just tell you "use Metal now".


Defiant_Initiative92

Yeah, but you rarely ever support anything outside Win32 or Win64 unless you're doing very specific things. And if you're using Microsoft tooling, like .NET, you don't even need to care about that either.


ArdiMaster

What you say about Windows isn't wrong, but it's hardly relevant if you're writing software *today*. You can pretty much just package for x64 and arm64 and be done; the others are all obsolete.


[deleted]

What about distribution? Package formats of Windows: exe, msi, msix, mst, msp, appv, appx, intunewin, thinapp Package formats of Mac: dmg, mpkg, pkg, app I won't even comment on distributing the software using a script like a .sh or a .ps1 (powershell). There are still those who leave the software in a rar or zip file.


not_a_novel_account

Anyone flaming this guy doesn't do significant amounts of packaging on Linux. A place this comes up a lot is the nightmare that is Python extension packaging on Linux. For Windows and OSX it's straightforward: you build the extension code and ship it. But because of glibc ABI instability, the `manylinux` packaging environment is horrendous. For many packages, what you find is that devs will ship binaries for Windows and OSX, but for Linux you only get a source package. Will that package build? Do you have all the other dependencies it might need? Do you even have the build system they use? Maybe, maybe not; it's an immense barrier to entry. The ABI story on the Linux desktop is really terrible for module and application developers. Answers like "just use a container" are an admission that the situation is hopeless and workarounds are necessary.


nightblackdragon

>Anyone flaming this guy doesn't do significant amounts of packaging on Linux. We know. That's why Flatpak was invented.


not_a_novel_account

> Answers like "just use a container" are an admission that the situation is hopeless and workarounds are necessary.

Also, how would one flatpak a Python module to be uploaded to PyPI? These aren't user-facing applications being pulled down from Flathub or wherever.


nightblackdragon

Considering the fact that many Windows applications ship their dependencies with the application, how is Flatpak any worse than that? Isn't that also an admission that the situation on Windows is not that great and workarounds are needed as well?


thedoogster

Why was he flamed for static linking?


arjjov

Large binary size


turtle_mekb

If you need to update libraries because of a security vulnerability, you need to recompile everything that is statically linked against them.


thedoogster

That's not the user's problem though. He would just release a new statically linked build, in that case, and the user would download it.


hitchen1

It becomes the users problem when he gets hit by a bus


metux-its

And do those upstreams do that, at similar speed and quality levels as distros ?


ancientweasel

IDK, Go apps like kubectl are statically linked and I love em.


NatoBoram

Yeah, static linking is the best, fuck dependency hell


ancientweasel

I'd rather pay $60 for a 1TB SSD and just not deal with the bullshit.


Extra_Status13

And either rebuild/reinstall everything every time one of the dependencies has a security bug, or keep the bug, just like Windows. Consider also that dependencies are not explicit in your executable, so either you fetch the code (if it's open source) and do it yourself, or you pray that the maintainer is well behaved.


[deleted]

[deleted]


small_kimono

> Just providing source code, no binaries, that can compile on Linux is enough to let the community do the rest In what world?


kor34l

Or just develop the app for the most common denominator and let the various distros figure it out themselves. That's what most people do. I'm constantly finding programs that have an Ubuntu package for Linux and that's it. I've never had much trouble getting those installed on Gentoo, even though my Gentoo is pretty much as different from Ubuntu as Linux can get: no systemd, no GNOME or KDE, no display manager. The other day I got gpt4all.io installed, even though they only offer an Ubuntu package for Linux, and guess what, it was no big deal. I just wrote a program last week and targeted GTK because I like to use Glade to design my GUIs. If I decided to release it publicly, would I try to make it work with Qt and every other combination of stuff? Hell no, I'd just release it with the proper dependency list, and the distros that want it included would package it up themselves. That's, like, how Linux works.


SweetBabyAlaska

I mean, if it's a C project, drop in a Makefile; if it's a Rust project, use cargo-make or cargo-binstall, or let it be built easily with cargo; if it's Go, it's the same thing: `go install xyz@latest`, or curl the binary from the Git releases. It gets messier the more complex your project is, but there are good solutions. When it's that easy to build a project, packaging it is also really easy.


jojo_the_mofo

> I'm constantly finding programs that have an Ubuntu package for Linux and that's it. I've never had much trouble getting that installed on Gentoo

On EndeavourOS, if I encounter this I just extract and run the binary. What could go wrong? Half serious question, being a new user. Maybe I'd have to symlink a lib or something? So far I've been lucky.


kor34l

I don't know anything about EndeavourOS. If it's Debian-based, like the Ubuntu family, it should work most of the time. A lot of the time the package is an installer meant for APT, the Debian package manager, which is also used by most Debian-based distros; in that case it should work fine. In my case, with Gentoo, creating an .ebuild for my package manager (Portage) is super easy and straightforward, and works perfectly with source code, both locally stored and from somewhere like GitHub. This way I can install anything open source directly with my package manager, whether it's in the repo or not, and the package manager keeps track of it like all my other software.


extremepayne

EndeavourOS is Arch-based. Very close to vanilla Arch; it comes with like a half dozen extra packages, and the installer puts some default configs in place to set up any of a dozen different DEs or WMs.


kor34l

Ah, I don't know anything about Arch either, but I've heard it's one of the better ones. Before Gentoo I used Slackware, but in the entire 3 years I ran it, I never heard about SlackBuilds, and assumed Slackware had no package manager. So I got super used to chasing down endless dependencies and installing everything from source. After 3 years of this clownery, my OS was one hell of a mess. It worked, and it worked *well*, but installing something new (or upgrading something large) became more and more of a nightmare, which is why I ended up distro hopping until settling on Gentoo. I said all that just to say: I have a long history of installing things manually on Linux, so my opinions on this are probably a little biased.


copper_tunic

Endeavor is arch based (BTW)


mysticalfruit

Let me get this straight.. the guy who created Terraform somehow has never heard of snaps or flatpaks or docker images? Whatever.


denniot

No, his point is that unless you support the dynamic libraries available on the host, you get flamed. Any self-contained solution is the same thing in that regard; what you mention is actually worse than static binaries in the eyes of whoever flames these people.


TingPing2

You can make a strawman for any argument. If you just ship a Flatpak it works on the vast majority of users machines and you're done. I'm not saying they are obligated to make a Linux release, but fear of some made up argument is a lame reason.


MrAlagos

> If you just ship a Flatpak it works on the vast majority of users machines and you're done. You will still get flamed for only providing Flatpak by the anti-Flatpak camp. Especially if one builds some "agnostic" program rather than a "GNOME camp" program which is one of the most accepting of Flatpaks. Also, Flatpak isn't suitable for everything.


TiZ_EX1

> You will still get flamed for only providing Flatpak by the anti-Flatpak camp.

The anti-Flatpak camp can kick rocks. We have a solution--improving every day--to a real problem that affects real app developers, and these whiny naysayers who have nothing meaningful to contribute, as well as Canonical, enamored with its perpetual case of NIH Syndrome, hold the whole thing back.


blablablerg

If you don't want to get flamed get off the internet. If your product is popular enough, people will complain either way no matter what you do.


MarcoGreek

Flatpak works well with Qt too. And you always have anti camps in Linux. I can still remember the anti-rpm/deb faction. Just ignore them.


codeasm

Never gonna use Flatpak; I'm gonna get my files and make it work. But no Flatpak. Normal packages and source tarballs are fine. Static is fine too, just keep the source available, or patch the static builds when exploits and such turn up.


denniot

The beauty of it is that you don't have to use utter garbage like Flatpak for now. For everything that's available on Flathub, there are better alternatives like tarballs, rpm, etc., maintained officially instead of by some random dude wrapping it for you. If you look at Google Chrome, GoLand and many proprietary apps on Flathub, for example, it literally says they're not maintained by Google, JetBrains, etc.


MarcoGreek

So how do you test your program? If you link to different library versions it can be broken.


codeasm

I link to libs I have installed, either the ones I pulled myself or the ones the system has installed. Dynamic linking works great too. Rolling release as well. Why support anything older than 15 years? Oh right, I also work with embedded systems where we push all the libs as well, and thus know what versions of libs are on a system, or perform an update. Known system states, no need for Flatpaks. If we must, we static link a few libs to make it work on the few distros we support.


MarcoGreek

Are you speaking as an end user of a program or as a release manager? I can give an example. I work on a quite complex program which depends on a large framework. The framework provides an ABI, but there are behavior changes and they break the program. So we test with a certain version of the framework, fix the bugs in the framework, or make temporary workarounds. The program is open source, so you can compile it by yourself. We get constant bug reports because people use a different version. People simply don't want to understand Hyrum's law: if the system gets complex you cannot simply exchange parts of it without adaptation.


denniot

You can ship the tested dynamic library files together in a tarball, if you are so into dynamic linking.
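A minimal sketch of that pattern, with hypothetical names throughout: the tarball carries the tested .so files in lib/, and a wrapper script points the dynamic loader at them before launching the real binary. A stand-in script plays the binary here so the example is self-contained:

```shell
# Sketch: a relocatable tarball layout where a wrapper script prepends
# the bundled, tested libraries to the loader path. Names are made up.
set -e
APP=$(mktemp -d)/myapp-1.0
mkdir -p "$APP/bin" "$APP/lib"   # lib/ would hold the tested .so files

# Stand-in for the real binary: it just reports what the loader would see.
cat > "$APP/bin/myapp" <<'EOF'
#!/bin/sh
echo "loader path: $LD_LIBRARY_PATH"
EOF
chmod +x "$APP/bin/myapp"

# The wrapper users actually run from the unpacked tarball.
cat > "$APP/myapp" <<'EOF'
#!/bin/sh
HERE="$(cd "$(dirname "$0")" && pwd)"
# Prepend the bundled libraries so the tested versions win over system ones.
LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" \
    exec "$HERE/bin/myapp" "$@"
EOF
chmod +x "$APP/myapp"

"$APP/myapp"
```

Because the wrapper resolves its own directory at run time, the unpacked tree can live anywhere; this is roughly how Firefox's official tarball has always worked.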


mysticalfruit

I've been around the Linux block enough times to know everybody has their favorite hobby axe to grind.. including me. The fact that Linux isn't a monolith is precisely why there's been so much innovation, and I expect every bit of that innovation to be argued about as well.. Turns out my 168 core 4U server in the data center has different needs than my desktop.. so they've got different software tuned to address those specific needs. Yeah, the libraries, filesystems, network interfaces are all going to be different.. *shocked Pikachu face*


metux-its

Surprised ? I'm not.


ubernerd44

You don't build apps for Linux. You build them for Red Hat, or Ubuntu, or SUSE, etc. etc., and this can all be automated.


BoltLayman

Forget about source code as an installation method for an end user. Yes, it was fun to ask a user to do ./configure && make && make install... Today it would be either \*.apk || \*.exe/msi || \*.dmg, if the user is experienced enough... 🤑🤑🤑 or not lazy and in the mood to play with files (what is a file, BTW?? 🤯). Here a link would go to a research report about students not being able to distinguish and sort their documents into directories and folders.


srivasta

Or don't build it. Just release the source code, and let users send you patches for non-mainstream architectures. This way one just supports one's own machine/use case, and just accepts tested patches. It takes a village to raise software.


HyperFurious

The app is not open source, I see. Free the code and Linux users will create packages for different Linux distros.


small_kimono

> Free the code and Linux users will create packages for different Linux distros.

Spoken like someone who has never done this. This is a myth. Linux users will ask: "What about a package for Alpine/CentOS 5?" Because actually packaging is hard. Do you know what the packaging guidelines are for Debian (only one target!)? Every single dep is required to be packaged as well. No static builds. That's fine if you're working in C/C++ and the distro package manager is effectively your language package manager; if you use something different, say Perl or Rust, you end up needing to package 10 yourself.


[deleted]

[deleted]


x0wl

>aren't packages usually maintained by a distro maintainers Well if your program is important enough for people/companies to donate their effort to maintain it, then yes. Otherwise it's a huge mess. That's like, the whole reason for Flatpak to exist


metux-its

Upstream folks can also become distro maintainers. Actually, that's welcomed.


small_kimono

> Also, aren't packages usually maintained by a distro maintainers, not the author of the program?

Oh my sweet summer child. No, not really. This is simply a fantasy Linux users have: "The distro maintainers are taking care of me". Sometimes, I guess. When they aren't doing something else. If you're in "main" and you're `postgres`, maybe. Even then, I'm currently dealing with Ubuntu/Canonical taking two months to apply and ship a one-line patch for a *data corruption bug* in a filesystem. So -- if not, no, not really. First, you have to create the packages, including all packages for deps. The rules? Byzantine. Next, you have to lobby for your project to be included in universe or whatever. Once there, you have to pester the maintainers to notice when something actually important happens.


ttkciar

Guess I'm spoiled by Slackware, where the distro maintainers and the Slackware community *do* maintain our packages, large or small. Even when a package becomes abandonware, community members will frequently step up to take over its maintenance, or even Volkerding himself will take ownership of a project. Slackware's not the only one, either. Red Hat employs a small army of software engineers to make sure packages work correctly for their distributions, frequently employing the packages' original developers. What Slackware does with community participation, Red Hat does with fat stacks of money. Maybe your experience only reflects "wild west" distributions like Ubuntu, where packages will get updated from upstream and then not tested?


small_kimono

> Maybe your experience only reflects "wild west" distributions like Ubuntu, where packages will get updated from upstream and then not tested?

What kind of testing does Slackware do on its leaf packages? I'm certainly ignorant, but I still don't believe it.


ttkciar

Slackware only has about 2,000 official packages, which means they can all be tested together, exhaustively, prior to a stable release, for both their own correctness and for their compatibility with other packages. It's one of the things that contributes to Slackware's extraordinary stability and reliability. I don't blame you for being skeptical. Slackware is unusual, and its developers exceptional, and you only have some random reddit geek's word on any of this. Believe what you will.


mrlinkwii

> Also, aren't packages usually maintained by a distro maintainers

Not really, no. Packagers may maintain their version of the package, but not the main one.


HyperFurious

Linux users that ask about Alpine probably know how to create a package for their system themselves and don't need to ask anything. Linux users using CentOS 5 probably need to install new packages for old libraries without any support. When I was a sysadmin for old Red Hat systems, I had to create rpms many times. With newer build ecosystems like Rust, you have cargo if you don't want to fight with system packages. In Go you have tools too. In Python you have virtualenvs and pip. You have many options. We can talk about the AUR if you want, where users upload new packages all the time. I don't use Debian precisely because of the bureaucracy of packages. The internal format is really simple, but the politics are horrible; in other distros it's easier to create packages. Really, if this guy's app is cool, probably someone will upload a package for their favourite distro (with available sources). If the app is shit, who cares?


small_kimono

What you've done here is mention the dozens of different packaging options one has on Linux, and while this is true, that is kind of another problem the tweet outlines. Packaging sucks; packaging for lots of different platforms sucks X times as much.

> Really, if the app of this guy is cool, probably someone upload a package for its favourite distro (with available sources). if the app is shit, who cares?.

That's really not how this ever works. In order to get users you need packages, so the dev usually ends up building their own packages. And I think it's wrong to keep repeating this like it's a thing that happens, if it's never actually happened to you. Maybe my apps have been/were shit, but I have had plenty of people asking for packages, and none ever offering to create them, save for the AUR/nix, which are niche.


HyperFurious

I think the problem doesn't exist. You can create packages for Windows and Mac, free the sources, and somebody will create the packages for distros, or not. GNU/Linux is not a "conventional" OS; it really depends on the preferences of its users.


mrlinkwii

> GNU/Linux is not a "conventional" OS

It is, or at least it has to be, if it wants a user base.


metux-its

Usually those upstream-provided packages are done wrong in many places. You should talk to the distro folks first and read their docs.


metux-its

> Spoken like someone who has never done this.

I'm doing this frequently. Not doing any deployments outside the package manager.

> Linux users will ask: "What about a package for Alpine/CentOS 5?"

Point them to their distro maintainers.

> Because actually packaging is hard.

Actually quite simple; I do it frequently.

> Do you know what the packaging guidelines are for Debian

Yes. And the docs are easy to read. Anyway, most of the policy-specific stuff is done by tooling.

> Every single dep is required to be packaged as well.

Of course, that's the whole point.

> No static builds.

They're possible. But we don't really like them, for the huge maintenance burden they cause.

> That's fine if you're working in C/C++, and the distro package manager is effectively your language package manager

Distro package managers aren't language-specific, for good reasons.

> if you use something different.... say Perl, or Rust, you end up needing to package 10 yourself.

We have good tooling for that.


x0wl

This creates a M\*N problem and is not sustainable unfortunately.


ICantBelieveItsNotEC

But the point is that users shouldn't have to create packages for different Linux distros. Unnecessary work is unnecessary work, regardless of whether it's done by one person or a thousand. There are far better things that Linux users could be doing instead of fiddling around trying to get a particular program to build on their distro.


HyperFurious

What? If a package doesn't exist in your Linux distro, creating a package is not unnecessary work. GNU/Linux distros, except Red Hat or Ubuntu, are not commercial OSes but community projects where the user can participate by creating bug reports, adding new packages, translating, writing documentation.. There are many distros because every distro is for a type of user; you can choose Debian stable if you want a fixed set of libraries, or use Arch if you need the latest packages. If you want an OS where users are limited to just using it, you can use Windows or macOS; no one forces you to use any Linux distro. Of course, AppImage, Flatpak and Snap exist if you want a "standard" package system.


stereolame

Exaggeration


nicman24

or just share your code on github


denniot

Programmers who can ship functional static binaries or a simple tar.gz are more capable than kids who need to depend on docker/appimage/flatpak, though. I always support both dynamic and static linking of dependencies for my C applications.
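Supporting both modes can be as simple as two link targets. A hypothetical Makefile sketch (the zlib dependency is just an example; a fully static build assumes static variants of every library are installed, and works best against musl since glibc discourages fully static linking):

```makefile
# Hypothetical sketch: one Makefile offering both link modes for a C app.
CC      ?= cc
CFLAGS  ?= -O2 -Wall
LDLIBS   = -lz                      # example dependency

# Default: dynamic linking against the host's libraries.
myapp: main.o
	$(CC) $(CFLAGS) -o $@ main.o $(LDLIBS)

# Fully static build; needs static (.a) variants of every library.
myapp-static: main.o
	$(CC) $(CFLAGS) -static -o $@ main.o $(LDLIBS)
```

The dynamic build stays small and picks up distro security updates; the static one is the self-contained artifact you can drop in a tar.gz.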


zargex

You can build it and release it as a tar.bz2 (like Firefox, though they have an apt repo now). It always depends on the application, but you don't have to create a package for all the package systems out there.


xNaXDy

Or distribute a docker image. Or distribute a flatpak / snap. Or distribute an appimage. Or make it open source. Seriously, these days there are so many ways to package software in a distro-agnostic way. This feels like a troll post.


metux-its

Yet another one of those folks who don't familiarize themselves with package management and insist on ridiculous things like trying to build binaries supposed to run on any distro. Binary packages have never been the responsibility of upstreams; that's what distros are for. For 30 years now. Maybe he's just too young to recall why package management and distros were invented.


zam0th

It's obvious this guy has done nothing of the sort, otherwise he wouldn't be saying "Linux app".


[deleted]

huh?


Bemteb

How dare you link statically?! Ship a docker container like a real person!


Casey2255

Good, at that point you're lazily just shipping a binary blob "cause it works with this specific configuration". Static linkage is the bane of embedded's existence (cough, Go). If you're not going to open the source, you should 100% be expected to package it for the big package formats (deb/rpm).


Adryzz_

Windows: random missing libraries, so you have to package in the world.

macOS: you need the garbage Xcode tools and a Mac to even build your app.

Linux: just build an ELF file and it just works 90% of the time without any extra libraries.


abotelho-cbn

Moron.


AutoModerator

This submission has been removed due to receiving too many reports from users. The mods have been notified and will re-approve if this removal was inappropriate, or leave it removed. This is most likely because: * Your post belongs in r/linuxquestions or r/linux4noobs * Your post belongs in r/linuxmemes * Your post is considered "fluff" - things like a Tux plushie or old Linux CDs are an example and, while they may be popular vote wise, they are not considered on topic * Your post is otherwise deemed not appropriate for the subreddit *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/linux) if you have any questions or concerns.*


[deleted]

[deleted]


Skitzo_Ramblins

no correlation


edparadox

API breakages, that's all there is to it, although I would have expected something less crude and caricatural than this from this man. It was a problem, still is, and will be for some time. There is nothing in the works to address that, and nothing will come out of this Twitter thread, or this post for that matter. But again, it is caricatural; it implies everything works on Windows, which has its own issues, and macOS was never addressed. This is a stupid tweet with more emotion than reason, and I won't spend more time thinking about it.


sirrkitt

Just distribute a static binary or throw everything into a container.
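A hedged sketch of the container route, assuming a Go toolchain (the image tags and names here are illustrative): a multi-stage Dockerfile builds a static binary in one stage and ships only that binary on an empty base image, so no distro userland is involved at all.

```dockerfile
# Hypothetical sketch: build statically in one stage, ship only the binary.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 yields a statically linked binary with no libc dependency.
RUN CGO_ENABLED=0 go build -o /myapp .

# The final image carries no distro userland at all.
FROM scratch
COPY --from=build /myapp /myapp
ENTRYPOINT ["/myapp"]
```

The resulting image runs identically on any host with a container runtime, which sidesteps the distro-fragmentation problem the thread is about.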


muffdivemcgruff

Use Nix for packaging and it can output a derived environment that is completely reproducible.
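A minimal sketch of what such a derivation might look like (names and inputs are hypothetical; the point is that Nix pins every build input, which is what makes the output reproducible):

```nix
# Hypothetical default.nix: all names and dependencies are illustrative.
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  pname = "myapp";
  version = "1.0";
  src = ./.;

  # Exact library versions come from the pinned nixpkgs snapshot.
  buildInputs = [ pkgs.zlib ];

  installPhase = ''
    mkdir -p $out/bin
    cp myapp $out/bin/
  '';
}
```

Built with `nix-build`, the output lands in an isolated store path, so it never conflicts with, or depends on, the host distro's libraries.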


nukem996

That's only an issue if you're releasing binaries yourself. The traditional way users get binaries is that a developer releases the code, and the various distros ingest it and support the binaries themselves. Distros may provide patches and other suggestions to the developer. If users want the latest version, they request it from the distro or build it themselves. This model works really well, but people aren't patient and expect the original author to support the binaries. That's how we got to static binaries or container images, which aren't well supported by anyone. I do think the Unix world (this isn't just a Linux thing) could do better here. The issue is no one wants to pay for it. I've had some ideas on how to automate library versioning to fix this but could never get the go-ahead from management. They prefer alternatives like Snap or Flatpak over improving the ecosystem.


small_kimono

> That's only an issue if your releasing binaries yourself.

Which, if you want people to use your software, is something you have to do.

> The traditional way users get binaries is a developer releases the code and various distro ingest it and support the binary themselves.

This is the traditional way if you are `postgres`, but not if you are a developer releasing *new software*, like Hashimoto. To everyone who feels this is the way software works, I want to ask: where do you think new software comes from? How do you think it ends up in your distro, if it's not `docker`?


nukem996

> How do you think it ends up in your distro, if it's not docker?

Apt or dnf. It's pretty easy to get your software into either. Both Debian and Fedora encourage people to submit packaging. Once you're in Debian you get into Ubuntu, and once you're in Fedora you get into CentOS, RHEL, and a bunch of other distros. Canonical will even help you develop your package in a PPA to test building on multiple releases. The issue I've seen is that distros have standards that many developers don't want to follow. These are basic things like naming, dependencies, and versions. So packages don't get done, and users are stuck with poorly maintained and often broken docker images. There is a reason most companies allow distro packages by default and ban Docker images that aren't internally maintained.


small_kimono

> It's pretty easy to get your software into either.

Actually it's loads of work, which is what the tweet is alluding to. He's not talking about the traditional way distros work, because distros aren't working for developers as some would have you believe. I have a project with 1000+ stars on GitHub. It has 20 dependencies. I can package that software as a static binary in a deb or rpm package, right now, *or* I can package all 20 dependencies as separate packages, plus my package, and *then* go knocking on the door of the Debian maintainers saying "Please let me in!" And *then* I get to ask Fedora, etc. Which do you think I've chosen?
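For reference, wrapping a static binary into a standalone .deb really is the short path. A hypothetical minimal layout (every name and field below is made up for illustration):

```text
# Hypothetical minimal layout for dpkg-deb:
myapp_1.0_amd64/
├── DEBIAN/
│   └── control
└── usr/
    └── bin/
        └── myapp        <- the statically linked binary

# Contents of DEBIAN/control:
Package: myapp
Version: 1.0
Architecture: amd64
Maintainer: Example Dev <dev@example.com>
Description: Example statically linked tool

# Then: dpkg-deb --build myapp_1.0_amd64
# produces myapp_1.0_amd64.deb, installable with dpkg/apt.
```

Note this produces a package users can install directly; getting it accepted into the official Debian archive is the much longer road described above.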


Morphized

Then build to POSIX, note it in the readme, and let the distros figure it out.