
Freenginx: Core Nginx developer announces fork

 7 months ago
source link: https://news.ycombinator.com/item?id=39373327

Worth noting that there are only two active "core" devs, Maxim Dounin (the OP) and Roman Arutyunyan. Maxim is the biggest contributor that is still active. Maxim and Roman account for basically 99% of current development.

So this is a pretty impactful fork. It's not like one of 8 core devs or something. This is 50% of the team.

Edit: Just noticed Sergey Kandaurov isn't listed on GitHub "contributors" because he doesn't have a GitHub account (my bad). So it's more like 33% of the team. Previous releases have been tagged by Maxim, but the latest (today's 1.25.4) was tagged by Sergey.

s.gif
It is scary to think about how much of web relies on projects maintained by 1 or 2 people.
s.gif
Not that scary when you remember there are some systems that haven't been significantly updated for decades (e.g. the Linux TTY interface). A lot of stuff can just coast indefinitely, you'll get quirks but people will find workarounds. Also this is kind of why everything is ever so slightly broken, IMHO.
s.gif
> Also this is kind of why everything is ever so slightly broken, IMHO.

OTOH, things that update too often seem to be more than slightly broken on an ongoing basis, due to ill-advised design changes, new bugs and regressions, etc.

s.gif
I am thinking with things that don't update often, we just get used to the broken parts. People learned to save every five minutes in Maya since the app crashes so often, for example. Every now and then, a PuTTY session will fill the screen with "PuTTYPuTTYPuTTYPuTTYPuTTY[...]" but it's been that way for at least 20 years, so it's not that remarkable.
s.gif
Tangent, but I haven't seen that happen on any of my PuTTY clients in years and I use it every day, so I think that finally got fixed? Or maybe it was a side effect of something stupid.
s.gif
That only helps if it stays static. For example, if the Linux TTY interface was unchanged for decades to such a degree that nobody worked on it, but then had a vulnerability, who would be able to fix it quickly?
s.gif
Perhaps someone with more knowledge can chime in. But, my impression is that there are vulnerabilities with TTY, it's just that we stay educated on what those are. And we build systems around it (e.g. SSH) that are secure enough to mitigate the effects of those issues.
s.gif
SSH was a replacement for Telnet. But any weaknesses at the TTY level are orthogonal to that, right?

Unless you mean, having thin clients use SSH as opposed to directly running serial cables throughout a building to VT100 style hardware terminals, and therefore being vulnerable to eavesdropping and hijacking?

But I think when we talk about TTY we mostly don’t refer to that kind of situation.

If someone talks about TTY today, I assume they mean the protocol and kernel interfaces being used. Not any kind of physical VT100 style serial communication terminals.

s.gif
I miss rooms of green and amber screen terminals hooked up via serial cable. As an undergrad I remember figuring out how to escape from some menu to a TTY prompt that I could somehow telnet to anywhere from. Later, I would inherit a fleet of 200 of them spread across 12 branch libraries. I can't remember how it worked except that somehow all the terminals ran into two BSDi boxes in the core room of the central library, and it had been hardened so you could not break out of the menus and telnet to arbitrary places. Over a year I replaced them all with Windows machines that ran a version of Netscape Navigator as the shell with an interface that was built in signed JavaScript. It was the early days of the web, and we had to support over 300 plug-ins for different subscriptions we had. The department that ran the campus network didn't want to let me on the network until I could prove to them everything was secure.
s.gif
This was on HN two(?) days ago: https://news.ycombinator.com/item?id=39313170

> I wrote the initial version of SSH (Secure Shell) in Spring 1995. It was a time when telnet and FTP were widely used.

> Anyway, I designed SSH to replace both telnet (port 23) and ftp (port 21). Port 22 was free. It was conveniently between the ports for telnet and ftp. I figured having that port number might be one of those small things that would give some aura of credibility. But how could I get that port number? I had never allocated one, but I knew somebody who had allocated a port.

Emphasis mine.

Cheers.

s.gif
Where does this idea come from? I see it repeated a lot, but it's not correct.

rsh was common on internal networks, but almost never used on the wider Internet. telnet was everywhere all across the net.

ssh was a revelation and it replaced telnet and authenticated/non-anonymous ftp primarily.

And also sometimes rsh, but less importantly.

s.gif
I wonder how many of these things that are just coasting are gonna have issues in 14 years.
s.gif
Nginx is still evolving a lot though.

E.g. HTTP/3 support was stabilized with 1.25.1, which came out in June 2023.

s.gif
This isn't one though. I think the issue he is talking about is around the CVEs that came out with the HTTP3 implementation. This is an area of very active and complex development.
s.gif
Meanwhile my anaconda installation died after a casual apt-get update lol

I now believe that every piece of software should be shipped as a container to avoid any system library dependencies.

s.gif
Certainly the web can mostly coast indefinitely. There are webpages from decades ago that still function fine, even that use JavaScript. The web is an incredibly stable platform all things considered. In contrast, it's hard to get a program that links to a version of Zlib from 10 years ago running on a modern Linux box.
s.gif
> Certainly the web can mostly coast indefinitely.

I'm not sure about that, for anything besides static resources, given the rate at which various vulnerabilities are found and how large automated attacks can be, unless you want an up-to-date WAF in front of everything to be a prerequisite.

Well, either that or using mTLS or other methods of only letting trusted parties access your resources (which I do for a lot of my homelab), but that's not the most scalable approach.

Back end code does tend to rot a lot, for example, like log4shell showed. Everything was okay one moment and then BOOM, RCEs all over the place the next. I'm all for proven solutions, but I can't exactly escape needing to do everything from OS updates, to language runtime and library updates.
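
(For reference, a minimal sketch of what that mTLS gating looks like in nginx; the hostname, paths and upstream below are placeholders, and issuing client certificates from the CA is still on you.)

    server {
        listen 443 ssl;
        server_name homelab.example;                       # placeholder name

        ssl_certificate        /etc/nginx/tls/server.crt;  # placeholder paths
        ssl_certificate_key    /etc/nginx/tls/server.key;

        # only clients presenting a certificate signed by this CA get in
        ssl_client_certificate /etc/nginx/tls/client-ca.crt;
        ssl_verify_client      on;

        location / {
            proxy_pass http://127.0.0.1:8080;              # the protected app
        }
    }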

s.gif
this problem -- great forward compatibility of the web -- has been taken care of with application layer encryption, deceitfully called "transport layer" security (tls)
s.gif
The web is the calm-looking duck that is paddling frantically. Do you want to be using SSL from the 90s, or IE vs. Netscape as your only choice, etc.? Nostalgia aside!
s.gif
HTTP/1.1 isn't really changing, is it?

That and a small collection of other things are standards-based and not going through changes.

s.gif
Yeah but you can just continue to use HTTP/1.1, which is simpler and works in more scenarios anyway (e.g. doesn't require TLS for browsers to accept it).
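
(In nginx terms, "just keep using HTTP/1.1" is simply not opting in to the newer protocols on the listen sockets; a minimal sketch, with placeholder names and paths:)

    server {
        listen 80;                     # plain HTTP/1.1, no http2 or quic parameters
        server_name example.com;

        location / {
            root /var/www/html;
        }
    }
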
s.gif
You could have stayed with HTTP/1.0 as well. Or Gopher.
s.gif
HTTP/1, HTTP/2 and HTTP/3 are huge standards that were developed, considered and separately implemented by hundreds of people. It's built in C which has an even more massive body of support through the standard, the compilers, the standard libraries, and the standard protocols it's all implemented on.

1 or 2 people maintain one particular software implementation of some of these standards.

It's interesting to think of what a large and massive community cheap and reliable computation and networking has created.

s.gif
IME, the best software is written by "1 or 2" people and the worst software is written by salaried teams. As an end user, it's only the encroachment by the latter that scares me.
s.gif
Yep. IME the only way to make a salaried team of 10 devs work efficiently is to have enough work that you can split it cleanly into 5-10 projects that 1-2 people can own and work on autonomously.

Too bad every team I've ever worked on as a consultant does the opposite. The biggest piles of shit I've ever seen created have all been the product of 10 people doing 2 people's worth of work...

s.gif
I don't worry when it's open source, as if it's that valuable someone will pick it up, or corps would be forced to. I do wish those 1 or 2 devs got more support monetarily from the huge corps benefitting.
s.gif
It is also why companies don’t buy SaaS services from single founders or small companies where risk of key people leaving is high impact.
s.gif
For the vast majority of use cases nginx from 10 years ago would not make a difference. You actually see the nginx version on some html pages and very often it's old.
s.gif
nginx from 5 years ago has some pretty nasty actively exploited CVEs.
s.gif
>> It is scary to think about how much of web relies on projects maintained by 1 or 2 people.

This is one reason maintainability is very important for the survival of a project. If it takes an extra person to maintain your build system or manage dependencies or... or... it makes it all the more fragile.

s.gif
This is your semi-annual reminder to fork and archive offline copies of everything you use in your stack.
s.gif
There's plenty of copies of the code. That doesn't help with the actual problems with the setup.
s.gif
That's why they work well. Not corrupted by corporate systems or group governance. Individuals have better vision and take responsibility.
s.gif
It's not that scary. If a project everyone depends on is broken and unmaintained, someone else will manufacture a replacement fairly quickly and people will vote with their feet.

NGINX is the de facto standard today, but I can remember running servers off Apache when I began professionally programming. I remember writing basic cross-browser SPAs with script.aculo.us and Prototype.js in 2005, before bundlers and React and Node.

Everything gets gradually replaced, eventually.

s.gif
I still deploy Apache httpd, because that’s what I know best, and it works.
s.gif
You can also probably host without a reverse proxy. Also there are alternatives like Caddy. IIS!! And I imagine the big clouds would swoop in and help, since their expensive CDNs and gateways rely on it, or maybe the Kubernetes maintainers, since most likely they use it.
s.gif
I think if 2 people designed most of the world’s water treatment plants, that’s not scary.

If 2 people are operating the plants, that’s terrifying.

s.gif
>freenginx.org

IANAL, but I strongly recommend reconsidering the name, as the current one contains a trademark.

s.gif
They could take the Postgres naming approach.

Ingres was forked; the post-fork version of Ingres was called "Post"gres.

So maybe name this new project "PostX" (for Post + nginx).

Though that might sound too similar to posix.

s.gif
The Postgres name is said to be a reference to the Ingres DB, not a fork of Ingres.

> The INGRES relational database management system (DBMS) was implemented during 1975-1977 at the University of California. Since 1978 various prototype extensions have been made to support distributed databases [STON83a], ordered relations [STON83b], abstract data types [STON83c], and QUEL as a data type [STON84a]. In addition, we proposed but never prototyped a new application program interface [STON84b]. The University of California version of INGRES has been ‘‘hacked up enough’’ to make the inclusion of substantial new function extremely difficult. Another problem with continuing to extend the existing system is that many of our proposed ideas would be difficult to integrate into that system because of earlier design decisions. Consequently, we are building a new database system, called POSTGRES (POSTinGRES).

[https://dsf.berkeley.edu/papers/ERL-M85-95.pdf]

s.gif
Isn't this a bit pedantic?

Fork vs. "hacked up [Ingres] enough ... Consequently, building a new database system" named Postgres.

s.gif
"Postginx" has a nice ring to it, could be an alcoholic beverage, a name of a generation, or even a web server.
s.gif
Not necessary. It’s not like F5 is going to go to Russia and file suit against any of them.
s.gif
Maybe not today, but one day they might. Better to start with a workable long term name.
s.gif
Bump each letter in nginx and we get.... ohjoy!
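
(The letter bump checks out, if anyone wants to verify the arithmetic:)

    $ echo nginx | tr 'a-z' 'b-za'
    ohjoy
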
s.gif
Dude, please, just create a fork & explain the name. ohjoy sounds perfect and the meaning is brilliant. This must be it.
s.gif
There was also a time where ng postfix was used to denote "next generation", so they could go with nginxng :)
This isn’t just “a core nginx dev” — this is Maxim Dounin! He is nginx. I would consider putting his name in the title. (And if I were F5, I’d have given him anything he asked for to not leave, including concessions on product vision.)

That said, I’m not sure how much leg he has to stand on for using the word nginx itself in the new product’s name and domain…

s.gif
> not sure how much leg he has to stand on for using the word nginx itself in the new product’s name and domain

Pretty sure they can't really do anything to him in Russia. Russia and the US don't recognize each other's patents, same as China.

s.gif
Right, they will just go after the domain forcing either a rename or a move to a Russian domain
s.gif
He *is* nginx?

https://freenginx.org/hg/nginx

I don't see it. Sure, he contributes. But in the last 3-4 years he definitely does not look like he is nginx based on that log. Or am I looking in the wrong place?

s.gif
And this is why counting commits doesn't give you an accurate picture of productivity.

(Regardless, if you scroll back past March 2020, the timeline "resets" to this past year, and you see a ton of Dounin commits. Looks like an artifact of how the hg web viewer deals with large, long-lived branches getting merged.)

s.gif
I think the mercurial log is not doing us any favors here, most of the first few pages is the history of the `quic` http/3 support branch which indeed Maxim is not working on. Scroll past it and he'll be much more prevalent. See for example the log of stable-1.24: https://freenginx.org/hg/nginx/shortlog/420f96a6f7ac
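
(One way to see this without fighting the web UI is to clone the repo and filter the log by author; the author string below is a guess at how his commits are recorded, so adjust as needed.)

    $ hg clone https://freenginx.org/hg/nginx && cd nginx
    $ hg log --user "Maxim Dounin" --limit 20 --template '{date|shortdate} {desc|firstline}\n'
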
s.gif
There's something wrong with the list. It's ostensibly sorted reverse chronologically but scroll further and you'll see it go from 2020-03-03 to "9 months ago" and from there on it's all him.
s.gif
Judging from the graph view (https://freenginx.org/hg/nginx/graph), it has to do with the QUIC branch landing onto the main branch, suggesting he had little role in the QUIC development but heavy role outside of it.
s.gif
And that's how 100x developers don't get the recognition they deserve.
s.gif
Philosophically, if a lead developer is doing most of the commits on a project, then they are monopolizing both the code and the decision making process, which is a sure way to kill a project.

If the basketball or soccer team captain were also a ball hog, they'd have trouble keeping the bench full.

When you become lead, you have to let some of the code go, and the best way I know to do it is to only put your fingers into the things that require your contextual knowledge not to fuck up. If you own more than 10% of the code at this point, you need to start gift-wrapping parts of the code to give away to other people. If you own more than 20%, then you're the one fucking up.

Obviously this breaks down on a team size of 2, but then so do concerns about group and team dynamics.

s.gif
I think there are problem domains this applies to, such as CRUD applications, and projects where deep understanding of core components makes it difficult to scale teams horizontally, as it will effectively require a hive mind.
s.gif
> which is a sure way to kill a project

Nonsense

s.gif
You should have googled his name, and you would have known within seconds. I mean, it's everywhere nginx (or its development) is mentioned.
> Unfortunately, some new non-technical management at F5 recently decided that they know better how to run open source projects. In particular, they decided to interfere with security policy nginx uses for years, ignoring both the policy and developers’ position.

Ah, I completely forgot F5 was involved in this; probably most everyone else did too, and F5 gets no money from this. It shouldn't matter to them; do they even have competition in the enterprise load balancer space? I spent 9 years of my career managing these devices; they're rock solid, and I remember some anecdotes about MS buying them by the truckload. They should be able to cover someone working on nginx, and maybe advertise it more for some OSS goodwill.

s.gif
I dunno about rock solid. I’ve had plenty of issues forcing a failover/reboot, multiple complicated tickets open a year, etc. But we have a sh ton of them. To be fair, some are kernel bugs with connection table leaks, SNAT + UDP, etc.

Buuuut, they have by far the best support. They’re as responsive as Cisco, but every product isn’t a completely different thing, team, etc. And they work really well in a big company used to having Network Engineering as a silo. I’d only use them as physical hardware, though. As a virtual appliance, they’re too resource hungry.

Nginx or HA-Proxy are technically great for anything reasonable and when fronting a small set of applications. I prefer nginx because the config is easier to read for someone coming in behind me. But they take a modern IT structure to support because “Developers” don’t get them and “Network Engineers” don’t have a CLI.

For VMWare, NSX-V HA-Proxy and NSX-T nginx config are like someone read the HOWTO and never got into production ready deployments. They’re poorly tuned and failure recovery is sloooow. AVI looked so promising, but development slowed down and seemed to lose direction post acquisition. And that was before Broadcom. Sigh.

s.gif
I'm very out of date so take my opinion with a grain of salt. The customer support I received from F5 when they acquired a telco product was about the worst support I've ever seen. Now this wasn't the general LB equipment that F5 has the reputation around, it's some specific equipment for LTE networks.

We'd get completely bogus explanations for bugs, escalate up the chain to VPs and leadership because there was an obvious training, understanding, and support for complex issues problem, and get the VPs trying to gaslight us into believing their explanations were valid. We're talking things like on our IPv4 only network, the reason we're having issues is due to bugs in the equipment receiving IPv6 packets.

So it's one of those things where I've personally been burned so hard by F5 that I'd probably, to an unreasonable degree, look for other vendors. The only thing is, this was a while ago, and the rumors I've heard are that no one involved is still employed by F5.

s.gif
I completely get this. I feel like every product I’ve had outside of a vendor’s wheelhouse has gone that way. We just use the BigIP gear from F5 and they’re better than the load balancers we used in the past. Thank god Cisco just abandoned that business.

I can’t imagine them supporting telco gear. The IPv6 thing has me LOLing because I just had a similar experience with a vendor where we don’t route IPv6 in that segment and even if we did, it shouldn’t break. Similarly, a vendor in a space they don’t belong that I imagine we bought because of a golf game.

A thing I dread is a product we’ve adopted being acquired… and worse, being acquired by someone extending their brand into a new area. It’s also why we often choose a big brand over a superior product. It’s not the issue of today, but when they get bought and by who. I hate that so much and not my decision, but it’s a reality.

It’s also a terrible sign if you’re dealing with a real bug and you’re stuck with a sales engineer and can’t get a product engineer directly involved.

I have a list of “thou shalt not” companies as well, and some may be similar where a few bad experiences ruined the brand for me. Some we’re still stuck with and I maaaay be looking for ways to kill that.

s.gif
When was this? I worked with them 2009-2018, support was really top notch. We could get super technical guys on the call and even custom patches for our issues, but our usage was relatively simple. I contrast them with McAfee products we've used, now that was a complete shitshow as a product and support.
s.gif
The last two companies I've worked for have paid for Nginx+ since software LB is all we really need.

Handling a few thousand RPS is nothing to nginx, and doesn't require fancy hardware.

That said, it replaced Kemp load balancers, which it seems is the next biggest competitor in the hardware load balancer appliance space.

s.gif
The world has moved on in the sense that "good enough" and cloud eats into their balance sheets I'm sure, but there's loads and loads of banks and legacy enterprises that maintain their ivory tower data centers and there's nothing to replace these with AFAIK. Google has Maglev, AWS perhaps something similar, MS no idea, everyone else just buys F5 or doesn't need it.
s.gif
My org moved off nginx for haproxy after we learned that (at the time, maybe it changed) reloading an nginx config, even if done gracefully through kernel signals, would drop existing connections, where haproxy could handle it gracefully. That was a fun week of diving in to some C code looking for why it was behaving that way.
s.gif
How did you come to that conclusion? I always believed a reload spawned new workers and let the old one drain off.
s.gif
Yes I reload nginx all the time and it doesn’t drop connections. I just use the debian nginx package. Not sure what the gp is talking about.
s.gif
we went in the opposite direction, not because haproxy was bad, just because nginx had a simpler config, and i think we were paying for haproxy but don't pay for nginx.

all that said, neither drops existing connections on reload

s.gif
nginx supports graceful reloading and I’m pretty sure it has for a very long time - there are references to it in the changelog from 2005

https://nginx.org/en/docs/control.html
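
(For reference, the usual graceful reload sequence; old workers finish their in-flight connections and exit while new workers take over. The pid file path varies by distro.)

    # check the new configuration before touching the running instance
    $ nginx -t

    # ask the master process to reload; equivalent to sending SIGHUP
    $ nginx -s reload
    # or: kill -HUP "$(cat /var/run/nginx.pid)"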

s.gif
Another issue with nginx IIRC is that it allows HTTP request smuggling, which is a critical security vulnerability.
s.gif
Amazon used to run entirely behind Citrix NetScaler hardware; no F5 at all. This was back in the early 2010s so I assume things have changed by now.
s.gif
Yup - there was a massive internal push to move off of SSL terminating LBs back in ~2018
s.gif
Cost.

Now, SSL termination is done at the host level, using a distributed SSL termination proxy developed by S3 called "JBLRelay"

s.gif
I'm pretty sure that AVI just wraps Nginx, even though they claim otherwise.

I think this because Nginx has a bunch of parsing quirks that are shared with AVI and nothing else.

s.gif
HAProxy is an enterprise load balancer that's available through Red Hat or other OSS Vendor. Nginx is just so easy to configure...
s.gif
HAProxy is a wonderful load balancer that doesn't serve static files thus forcing many of us to learn Nginx to fill the static-file-serving scenarios.

Caddy seems like a wonderful alternative that does load balancing and static file serving but has wild config file formats for people coming from Apache/Nginx-land.

s.gif
Just for completeness sake and probably not useful to many people, HAProxy can serve a limited number of static files by abusing the back-end and error pages. I have done this for landing pages, directory/table of content pages. One just makes a properly configured HTTP page that has the desired HTTP headers embedded in it and then configure it as the error page for a new back-end and use ACL's to direct specific URL's to that back-end. Then just replace any status codes with 200 for that back-end. Probably mostly useful to those with a little hobby site or landing page that needs to give people some static information and the rest of the site is dynamic. This reduces moving parts and reduces the risk of time-wait assassination attacks.

This method is also useful for abusive clients that one still wishes to give an error page to. Based on traffic patterns, drop them in a stick table and route those people to your pre-compressed error page in the unique back-end. It keeps them at the edge of the network.
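
(A minimal sketch of that trick, in case anyone wants to try it; backend names and file paths are made up, and the .http file has to be a complete raw response, status line and headers included.)

    # /etc/haproxy/pages/landing.http is a complete raw response, e.g.:
    #   HTTP/1.0 200 OK
    #   Content-Type: text/html
    #   (blank line, then the HTML body)

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend fe_main
        bind :80
        use_backend be_landing if { path / }
        default_backend be_app

    backend be_landing
        # no servers on purpose: every request "fails" over to the canned
        # file, which already carries its own 200 status line and headers
        errorfile 503 /etc/haproxy/pages/landing.http

    backend be_app
        server app1 127.0.0.1:8080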

s.gif
FYI: Serving static files is easier and more flexible in modern versions of HAProxy via the `http-request return` action [1]. No need to abuse error pages and no need to embed the header within the error file any longer :-) You even have some dynamic generation capabilities via the `lf-file` option, allowing you to embed e.g. the client IP address or request ID in responses.

[1] https://docs.haproxy.org/dev/configuration.html#4.4-return

Disclosure: I'm a community contributor to HAProxy.
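
(Minimal example of that directive, available since HAProxy 2.2; the path and content type are placeholders.)

    frontend fe_main
        bind :80
        # serve one small static file straight from the proxy, no backend needed
        http-request return status 200 content-type "text/html" file /etc/haproxy/pages/landing.html if { path / }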

s.gif
Nice, I will have to play around with that. I admit I sometimes get stuck in outdated patterns due to old habits and being lazy.

> I'm a community contributor to HAProxy.

I think I recall chatting with you on here or email, I can't remember which. I have mostly interacted with Willy in the past. He is also on here. Every interaction with HAProxy developers has been educational and thought-provoking, not to mention pleasant.

s.gif
> I think I recall chatting with you on here or email, I can't remember which.

Could possibly also have been in the issue tracker, which I did help bootstrapping and doing maintenance for quite a while after initially setting it up. Luckily the core team has taken over, since I've had much less time for HAProxy contributions lately.

s.gif
True and I've made use of the Nginx adapter, but the resulting series of error messages and JSON was too scary to dive in further. The workflow that would make the most sense to me (to exit Nginx-world) would be loading my complex Nginx configs (100+ files) with the adapter, summarizing what could not be interpreted, and then writing the entirety to Caddyfile-format for me to modify further. I understand that JSON to Caddyfile would be lossy, but reading or editing 10k lines of JSON just seems impossible and daunting.
s.gif
Thanks for the feedback, that's good to know.
s.gif
A load balancer shouldn't serve static files. It shouldn't serve anything. It should... load balance.

I can see why you'd want an all-in-one solution sometimes, but I also think a single-purpose service has strengths all its own.

s.gif
> but has wild config file formats for people coming from Apache/Nginx-land.

stockholm syndrome

s.gif
I can see that. But for me, I was so very relieved to no longer deal with Apache config files after switching to Caddy.
s.gif
I keep a Caddy server around and the config format is actually much, much nicer than nginx's in my experience. The main problem with it is that everybody provides example configurations in the nginx config format, so I have to read them, understand them, and translate them.

This works for me because I already knew a fair bit about nginx configuration before picking up Caddy but it really kills me to see just how many projects don't even bother to explain the nginx config they provide.

An example of this is Mattermost, which requires WebSockets and a few other config tweaks when running behind a reverse proxy. How does Mattermost document this? With an example nginx config! Want to use a different reverse proxy? Well, I hope you know how to read nginx configuration because there's no English description of what the example configuration does.

Mastodon is another project that has committed this sin. I'm sure the list is never-ending.
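
(For what it's worth, the Caddy v2 equivalent for something like Mattermost ends up tiny, since reverse_proxy passes WebSocket upgrades through on its own; the hostname and port here are placeholders, 8065 just being Mattermost's usual default.)

    chat.example.com {
        reverse_proxy 127.0.0.1:8065
    }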

s.gif
> The main problem with it is that everybody provides example configurations in the nginx config format, so I have to read them, understand them, and translate them.

This is so real. I call it "doc-lock" or documentation lock-in. I don't really know a good scalable way to solve this faster than the natural passage of time and growth of the Caddy project.

s.gif
LLMs baby! Input nginx config, output caddy config. Input nginx docs, output caddy docs. Someone get on this and go to YC.
s.gif
You're absolutely right. I'm going to do this today.

It's clear from this thread that a) Nginx open source will not proceed at its previous pace, b) the forks are for Russia and not for western companies, and c) Caddy seems like absolutely the most sane and responsive place to move.

s.gif
LLMs do a horrendous job with Caddy config as it stands. It doesn't know how to differentiate Caddy v0/1 config from v2 config, so it hallucinates all kinds of completely invalid config. We've seen an uptick of people coming for support on the forums with configs that don't make any sense.
s.gif
For just blasting a config out, I'm sure there are tons of problems. But (and I have not been to your forums, because...the project just works for me, it's great!) I've had a lot of success having GPT4 do the first-pass translation from nginx to Caddy. It's not perfect, but I do also know how to write a Caddyfile myself, I'm just getting myself out of the line-by-line business.
s.gif
It would be worth flagging in this comment that you represent F5. I didn't realize that until I found your other comment below.
s.gif
I haven't read the content of the patches to understand the impact of the bugs, but from my own experience [0] I can suggest a few reasons:

- CVEs are gold to researchers and organizations like citations are to academics. In this case, the CVEs were filed based on "policy" but it's unclear if they are just adding noise to the DB.

- The severity of the bug is not as severe as greater powers-that-be would like to think (again, they see it as doing due diligence; developers who know the ins and outs might see it as an overreaction).

- Bug is in an experimental feature.

I'm not saying one way is right or not in this case, just pointing out my experience has generally been that CVEs are kind of broken in general...

[0]: https://github.com/caddyserver/caddy/issues/4775

s.gif
To summarize: the more CVEs a "security researcher" can say he created on his resume, the more impressive he thinks he looks. Therefore, the incentive to file CVEs for any stupid little problem is very high. This creates a lot of noise for developers who are forced to address sometimes nonsense that are filed as "high" or "critical".
s.gif
"Denial of service" is never a security bug; it's a huge mistake people have started classifying these things as such to start with. Serious bug? Sure. Loss of security? Not really.
s.gif
> "Denial of service" is never a security bug

That very much depends on what service is being denied. Nginx is _everywhere_. While not a direct security concern for nginx (instead an availability issue), it could have security or safety implications for wider systems. What if knocking out nginx breaks a service for logging & monitoring security information? Or an ambulance call-out management system? Or a payment processing system for your business at the busiest time of your trading year? There are many other such examples. This sort of thing is why availability can be considered a security matter, and therefore why DoS vulnerabilities, particularly those affecting common software, are handled as security issues of significant severity.

s.gif
Eh, it's widely considered that part of security is availability.

But I agree DoS is kind of a strawman since everything connected to a network is vulnerable to some form of DoS without extensive mitigation.

s.gif
>The most recent "security advisory" was released despite the fact that the particular bug in the experimental HTTP/3 code is expected to be fixed as a normal bug as per the existing security policy, and all the developers, including me, agree on this.

>And, while the particular action isn't exactly very bad, the approach in general is quite problematic.

s.gif
No, a MegaZone. Haven't you heard, we come in six packs now. ;-)

Yeah, very, very likely one and the same. Since 1989.

s.gif
Wow, that's a throwback. I was an ISP person back in the Portmaster era. You're at F5 now, I guess!

Can you say more about the CVE thing? That seems like the opposite of what Maxim Dounin was saying.

s.gif
Yeah, I've been with F5 since 2010 - gotta love those old PortMasters though, Livingston was good times, until Lucent took over. I was there 95-98.

I don't know what else there is to say really. The QUIC/HTTP/3 vuln was found in NGINX OSS, which is also the basis for the commercial NGINX+ product. We looked at the issue and decided that, by our disclosure policies, we needed to assign a CVE and make a disclosure. And I was firmly in that camp - my personal motto is "Our customers cannot make informed decisions about their networks if we do not inform them." I fight for the users.

Anyway, Maxim did not seem to agree with that position. There wasn't much debate about it - the policy was pretty clear and we said we're issuing a CVE. And this is the result as near I can tell.

Honestly, anyone could have gone to a CNA and demanded a CVE and he would not have been able to stop it. That's how it works.

s.gif
Oh my god, the Internet is such a small place. Good to hear you're doing well - we interacted a bit when I was running an ISP in the 90s as well. (Dave Andersen, then at ArosNet -- we ran a lot of PM2.5e and then PM3s).

And appreciate the clarification about the CVE disagreement.

s.gif
Those were great times. I learned a hell of a lot working at Livingston, because we had to. We were basically a startup selling to ISPs right as the Internet exploded and we grew like crazy. Suddenly we're doing ISDN BRI/PRI, OSPF, BGP, PCM modems, releasing chassis products (PM-4)... Real fun times, always something new happening. I even ended up our corporate webmaster since I'd been playing with web tech for a few years and thought it'd be a good idea if we had a site. Quite a way to jumpstart a career.

And the customers were, by and large, great.

s.gif
Oof. Presumably Dounin had other gripes about the company that had been building up? This seems like a pretty weird catalyst for a fork. Feels more like this was the last straw among many.

I get that CVEs have been politicized and weaponized by a bunch of people, but it seems weird to object that strenuously to something like this.

s.gif
I don't know much about this situation, but from what I've read, you were clearly in the right. It doesn't matter if the feature is in optional/experimental code. If it's there and has a vulnerability, give it a CVE. The customers/users can choose how much they care about it from there.

> Honestly, anyone could have gone to a CNA and demanded a CVE and he would not have been able to stop it. That's how it works.

I recently did exactly that when a vendor refused to obtain a CVE themselves. In my case, I was doing it as part of an effort to educate the vendor on how CVEs worked.

s.gif
> Honestly, anyone could have gone to a CNA and demanded a CVE and he would not have been able to stop it. That's how it works.

Even if third parties can file CVEs, do you think it hits different when the parent organization decides to do so against the developer's wishes? Why do he and F5 view the bugs differently? It sounds like the fork decision was motivated less by the actual CVEs and more about how the decision was negotiated (or not at all).

(PS. Thanks for participating in the discussion.)

s.gif
Personally, I think it's more honest if the parent org does not try to contest a CVE being assigned to a legitimate issue. If a CNA gets a report of a vulnerability in code, even if it's an uncommon configuration, they should be assigning a CVE to it and disclosing it. The entire point of the CVE program is to identify with a precise identifier, the CVE, each vulnerability that was shipped in code that is generally available.

Based on my observation of various NGINX forums and mailing lists, the HTTP/3 feature, while experimental, is seeing adoption by the leading edge of web applications, so I don't think it could be argued that it's not being slowly rolled into production in places.

s.gif
> Maxim did not want CVEs assigned.

... to this specific bug in an experimental feature.

Originally I read your comment as Maxim doesn't want to use CVEs at all.

s.gif
I don't see anything more in that mailing list thread beyond the post you linked to.

Where was the disagreement hashed out, so I can read more?

s.gif
Internally at F5 (where I work as a Principal Security Engineer in the F5 SIRT and was one of the people responsible for making the call on assigning the CVEs).
Given this fork still boasts a 2-clause BSD license, the corporate nginx can still make the effort to backport patches. It's certainly harder than requiring a single converged development branch, but how closely they track Maxim's work is ultimately up to them.

If nginx continues to receive more attention from security researchers, I imagine Maxim will have good reasons to backport fixes the other way too, or at least benefit from the same disclosures even if he does prefer to write his own patches as things do diverge.

Though history also shows that hostile forks rarely survive 6 months. They either get merged if they had enough marginal value, or abandoned outright if they didn't. Time will tell.

s.gif
I'm curious to see where this fork will go. The whole situation is a mess:

- nginx is "open core", with some useful features in the proprietary version.

- angie (a fork by several core devs) has a CLA, which sounds like a bait and switch waiting to happen, and distros won't package it

- freenginx is at least open source. But who knows if it'll still be around by June.

I admit I haven't followed this issue closely, but what is he talking about?

>In particular, they decided to interfere with security policy nginx uses for years, ignoring both the policy and developers’ position.

s.gif
We (F5) published two CVEs today against NGINX+ & NGINX OSS. Maxim was against us assigning CVEs to these issues.

F5 is a CNA and follows CVE program rules and guidelines, and we will err on the side of security and caution. We felt there was a risk to customers/users and it warranted a CVE, he did not.

s.gif
This seems like a much larger story than the fork, given the install base of nginx.

For clarity are you referring to CVE-2024-24989 and -24990 (HTTP/3)?

s.gif
This is confusing. The CVE doesn't describe the attack vector with any meaningful degree of clarity, except to emphasize how you'd have to have a known unstable and non-default component enabled. As far as CVEs go, it definitely lacks substance, but it's not some catastrophic violation of best practices. It hardly reflects poorly on Maxim or anything he's done for Nginx. This seems like an extreme move, and it makes me wonder if there's something we're missing.
s.gif
Yes, those are the two CVEs I was referring to. All I know is he objected to our decision to assign CVEs, was not happy that we did, and the timing does not appear coincidental.
s.gif
QUIC in Nginx is experimental and not enabled by default. I tend to agree with him here that a WIP codebase will have bugs that might have security implications, but they aren't CVE worthy.
s.gif
We know a number of customers/users have the code in production, experimental or not. And that was part of the decision process. The security advisories we published do state the feature is experimental.

When in doubt, err on the side of doing the right thing for the users. I find that's the best approach. I don't consider CVE a bad thing - it shouldn't be treated like a scarlet letter to be avoided. It is a unique identifier that makes it easy to talk about a specific issue and get the word out to customers/users so they can protect themselves. And that's a good thing.

The question I ask is "Why not assign a CVE?" You have to have a solid reason not to do it, because our default is to assign and disclose.

I don't think having the CVEs should reflect poorly on NGINX or Maxim. I'm sorry he feels the way he does, but I hold no ill will toward him and wish him success, seriously.

s.gif
What does the policy say about reporting security issues in experimental/not-enabled-by-default/unstable code?
s.gif
I think you'd have to ask Maxim. My take is he felt experimental features should not get CVEs, which isn't how the program works. But that's just my take - I'm the primary representative for F5 to the CVE program and on the F5 SIRT, we handle our vuln disclosures.
s.gif
I'm inclined to agree with your decision to create and publish CVEs for these, honestly. You were shipping code with a now-known vulnerability in it, even if it wasn't compiled in by default.
s.gif
if it's not compiled in by default, then you aren't shipping the code! Somebody is downloading it and compiling it themselves!
s.gif
Incorrect. Features available to users still require a minimum, standard level of support. This is like the deceptive misnomer of staging and test environments provided to internal users used no differently than production in all but name.
s.gif
If the feature is in the code that's downloaded, regardless of whether or not the build process enables it by default, the code is definitely being shipped.
s.gif
Yes. It's no different from any optional feature. Actual beta features should only be shipped in beta software.
s.gif
BRB, filing CVEs against literally any project with example code in their documentation...
s.gif
That's actually supported by the CVE program rules. Have at it if you find examples with security vulns.
s.gif
I've actually seen CVEs like that before, I agree that's bonkers but I have seen it...
s.gif
Given how frequently people copy and paste example code… why is that surprising? Folks need to be informed. CVEs are a channel for that.
s.gif
Pssst: People who copy+paste example code aren't checking CVEs
s.gif
You and I have very different notions of "shipped". It's open source code, it's being made publicly available. That's shipped, as I see it.
s.gif
This is an insane standard and attempting to adhere to it would mean that the CVE database, which is already mostly full of useless, irrelevant garbage, is now just the bug tracker for _every single open source project in the world_.
s.gif
This. CVE has become garbage because "security researchers" are incentivized to file anything and everything so they can put it on their resume.
s.gif
Why is it insane? The CVE goal was to track vulnerabilities that customers could be exposed to. It is used…in public, released versions. Why wouldn’t it be tracked?
s.gif
Because it's not actually part of the distribution unless you compile it yourself.

It is not released any sense of the word. It is not even a complete feature.

I am actually completely shocked this needs to be explained. Legitimate insanity.

s.gif
You're all also missing the fact that the vuln is also in the NGINX+ commercial product, not just OSS. Which has a different release model.

Being the same code it'd be darn strange to have the CVE for one and not the other. We did ask ourselves that question and quickly concluded it made no sense.

s.gif
It's in the published source code, as a usable feature, just flagged as experimental and not compiled by default. It's not like this is some random development branch. It's there, to be used en route to being stable. People will have downloaded a release tagged version of the source code, compiled that feature in and used it.

By what definition is that not shipped?

> I am actually completely shocked this needs to be explained. Legitimate insanity.

Right back at you.

s.gif
>just flagged as experimental and not compiled by default

Are UML diagrams considered in scope too?

s.gif
I guess a vulnerability doesn't count unless it's default, lol. Just don't make it default and you never have any responsibility, nor do those who use it or use a vendor version that has added it to their product.
s.gif
>I guess a vulnerability doesn’t count unless it’s default lol.

It's still being tested. It's not complete. It's not released. It's not in the distribution. The number of people that have this feature in the binary AND enabled is smaller than the number of people who agree this should be a CVE.

CVEs are not for tracking bugs in unfinished features.

s.gif
(not explicitly asking you, MZMegaZone) Does anyone understand why a disagreement about this would be worth the extra work in forking the project?

I'm not very familiar with the implications, so it seems like a relatively fine hair to split, as though the trouble of dealing with these as CVEs would be less than the extra work of forking.

s.gif
It probably wasn't. There's likely something else going on. Either Dounin had already decided to fork for other reasons, and the timing was coincidental, or there were a lot of reasons building up, and this was the final straw.

Or he's just a very strange man, and for some reason this pair of CVEs was oddly that important to him.

I don't get it... does he not know about angie [1]? It was created by NGINX core devs after the F5 acquisition, if I'm not mistaken, and it's a drop-in replacement for NGINX.

[1] https://github.com/webserver-llc/angie

s.gif
angie is run by a corporate entity that could do exactly what F5 did.
s.gif
> not run by corporate entities

> webserver, llc

s.gif
Could be related to the fact that Angie offers 'pro' version: https://wbsrv.ru/angie-pro/docs/en/

From statement: "Instead, I’m starting an alternative project, which is going to be run by developers, and not corporate entities"

s.gif
Hm.

I guess this consultancy-on-a-paid-version model doesn't bother me (and clearly didn't bother the developer of freenginx while they were paying him).

But a double fork can't be good.

s.gif
I assume USA companies are by far the highest revenue source for Nginx Plus. Both of these forks seem to be based in Russia. How is a USA company supposed to pay either of these vendors for their consulting or Pro versions?

How long until F5 submits requests for domain ownership of freenginx.org, and how quickly does Angie get takedown requests for their features that look remarkably similar to Nginx Plus features (e.g., the console)?

s.gif
> features that look remarkably similar to Nginx Plus features (e.g., the console)

It's illegal for products in the same space to have similar features?

s.gif
Please compare the two and let us know if you think "similar" is the right word.
s.gif
Thanks, I was trying to find the license for the nginx console but thought it might just be part of the plus offering only.
s.gif
> clearly didn't bother the developer of freenginx while they were paying him

Clearly it did, so much so that he gave up all that pay.

s.gif
The main criticism is that it requires signing a CLA, so they might switch to a non-free license any day now.
s.gif
But anyone, including you and me, could re-license MIT/BSD-licensed open-source project under a different license, including non-free. CLA does not affect that.
Per the discussion at https://news.ycombinator.com/item?id=39374312, this cryptic shade:

> Unfortunately, some new non-technical management at F5 recently decided that they know better how to run open source projects. In particular, they decided to interfere with security policy nginx uses for years, ignoring both the policy and developers’ position.

Refers to F5's decision to publish two vulnerabilities as CVEs, when Maxim did not want them to be published.

Is called "rage-fork" perhaps this. So proposed title: nginx dev rage-forks over security disagreement with boss company

But then perhaps he also has every right to do it, even though AFAIR the original author was somebody else.

s.gif
Rage-fork doesn’t show up anywhere in their announcement, nor does it read like they’re doing something specifically out of rage.

Everyone has a right to fork the project. Only time will tell if they get a critical mass of developers to keep it going.

s.gif
Surely "Nginx" is trademarked, copyrighted, etc. A cool and collected fork would do some basic work to avoid trivial lawsuits, consider the other forks already in the space, and write up a bit on how this fork will be different from the others.
s.gif
Russia has laws on the books that allow them to exempt domestic operations from international IP enforcement and to nullify any damages if the entity has a connection to an "unfriendly state."
s.gif
It's worth pointing out that Maxim Dounin is, by himself, likely critical mass for Nginx. Since he started in 2011 he is by far the most active contributor to the codebase.
s.gif
Why does the identity of the original author matter here?
s.gif
In my opinion the original author did a really good job, so I found it interesting to know where and whether he might continue his vision.

Edit: I see now from the hg history that Igor hasn't been coding on Nginx for a decade actually.

s.gif
Indeed, the original work done by a single dev (Igor) to get the nginx project running was very impressive, both in time and in the volume of code produced. I can't really recall why he left, but other comments around the thread imply such forks have happened more than once.

As a sidenote, I believe the people who start projects that they themselves run in an excellent manner should be praised, supported and noted, and beyond that their identities don't need to matter. It very much matters that some particular person with the weird nick burntsushi created this wonderful tool rg and kept growing it for a long time. Besides, I can bet that for projects such as Cosmopolitan C, it absolutely matters that jart started/did it.

s.gif
Thanks, I've never seen this fork mentioned before. This alone is compelling:

"Simplifying configuration: the location directive can define several matching expressions at once, which enables combining blocks with shared settings."

s.gif
Also owned by a for-profit company who offers a pro version.
s.gif
Maybe a coop of sorts could be formed where they pull in funds from sponsorships. A non-profit maybe. Devs could "lease" themselves to corporate sponsors and work on the project + some percentage time towards features they need. Sponsored development..

IDK, it could be a way to do it, pay the bills and then some, and also limit the negative impacts of a public business or VC-funded growth startup.

What a coincidence: some days ago I was reading some HN posts related to lighttpd and I found [1]. The link is dead and it has inappropriate content, so use archive.org. The author doesn't go into much detail about why nginx being purchased is a problem, but rather into how to configure lighttpd. And the first comment predicts the hypothetical case of F5 being problematic.

[1] https://news.ycombinator.com/item?id=19413901

s.gif
I have been using lighttpd, which can also host static content and do proxying. On top of that, lighttpd supports CGI/FastCGI/etc. out of the box as well, and it takes only 4MB of memory by default at start, so it works for both low-end embedded systems and large servers.
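
(A minimal lighttpd.conf sketch along those lines, serving static files and proxying one path to a local app; paths, ports and the /api/ prefix are placeholders.)

    server.modules       = ( "mod_proxy" )
    server.port          = 80
    server.document-root = "/var/www/html"
    index-file.names     = ( "index.html" )

    # hand /api/ off to a local backend, serve everything else from disk
    $HTTP["url"] =~ "^/api/" {
        proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => 8080 ) ) )
    }
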
s.gif
I've recently needed to build a Docker image to run a static site. I compiled busybox with only its httpd server. It runs with 300 KB of RAM with a scratch image and tini.

I didn't compile fastcgi support into my build, but it can be enabled.
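
(Roughly what that looks like as a Dockerfile, assuming you've already built a static busybox with only the httpd applet and grabbed a static tini; the file names here are placeholders.)

    FROM scratch
    COPY tini-static   /tini        # statically linked tini
    COPY busybox-httpd /bin/httpd   # busybox built with just the httpd applet
    COPY site/         /www/        # the static site content
    ENTRYPOINT ["/tini", "--", "/bin/httpd", "-f", "-p", "80", "-h", "/www"]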

s.gif
Yes, busybox httpd or civetweb is even smaller, both around 300 KB.

For tini, do you mean https://github.com/krallin/tini? How large is your final Docker image, and why not just Alpine in that case, which is musl+busybox?

s.gif
I used it to avoid having to learn lots of stuff about web configuration that bigger servers might require. Between lighttpd and DO droplets, I could run a VM per static site for $5 a month each with good performance. I’m very grateful for lighttpd!
https://my.f5.com/manage/s/article/K59427339

> All F5 contributions to NGINX open source projects have been moved to other global locations. No code, either commercial or open source, is located in Russia.

yeah, yeah

I'm hoping the fork will allow having code comments.
It seems every time I read about a project being forked, they use the (probably) trademarked name in the project's fork, only to need a rename a few weeks later.
F5 closing its Moscow office: is this a result of US sanctions?
Tangent, but I got curious about contributing, so I went to the Freenginx homepage; it looks like this project will be organized over a mailing list. I would love it if someone would create a product that gives mailing lists a tolerable UI.
s.gif
SourceHut? It’s a forge organized around an email rather than pull request workflow.
Time for me to slowly start looking for an alternative.

There was a time when I wanted to move away from it and was eyeing HAProxy, but the lack of the ability to serve static files didn't convince me. Then there was Traefik, but I never looked too much into it, because Nginx is working just fine for me.

My biggest hope was Cloudflare's Rust-based Pingora pre-announcement, which was then never published as Open Source.

Now that I googled for the Pingora name I found Oxy, which might be Pingora? Googling for this yields

> Although Pingora, another proxy server developed by us in Rust, shares some similarities with Oxy, it was intentionally designed as a separate proxy server with a different objective.

Any non-Apache recommendations? It should be able to serve static files.

s.gif
That's a third party plugin, not core Caddy.
s.gif
And?

(That isn't about Caddy, rather a third-party plugin.)

s.gif
Have you finally decided to match FQDN URLs correctly?

I'd love to get rid of the part of my clients' codebase that starts with // workaround for broken caddy servers

s.gif
I mean I’m not sure how it’s good to want to move to a dev who is against CVEs and disclosures…
s.gif
I think people are seeing this as a very generic "big bad globocorp destroying OSS community", and not moving past the headlines. I'm with you, this seems like a foolish thing to decide to fork the project over. Probably there is other conflict brewing, and this was just a convenient opportunity.
s.gif
Did I miss something regarding that Maxim didn't want CVEs and disclosures? I was not aware of this. And F5 are the ones wanting to add the CVEs (as happened in the announcement which was released an hour earlier)?

I could have sworn that I've read about Nginx CVEs in the past.

s.gif
Well it seems he didn’t think this particular thing should have one despite the criteria being clear.
s.gif
I'm going to third the suggestions for caddy, I've replaced nginx as a reverse proxy in a couple places with caddy and it's been so much easier to maintain.
wondering also whether Igor and Maxim are ok, what w/ the geopolitical situation there.
Anyone have more info about the changes nginx made?
If I ever need nginx I'll use freenginx. But funny enough all my services run in Traefik these days. 15 years ago Apache httpd was the norm, and lately nginx has been, and now I can't even think of a reason to use it.
Can it un-swap the behavior of SIGTERM and SIGKILL please?
s.gif
Swap SIGTERM and SIGQUIT behavior? I don't think you can catch SIGKILL.
s.gif
Correct. The only other untrappable signal is SIGSTOP.
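
(For context, the behavior being complained about is nginx's documented signal handling for the master process, where TERM is the fast shutdown and QUIT the graceful one:)

    #   TERM, INT   fast shutdown
    #   QUIT        graceful shutdown
    #   HUP         reload configuration, start new workers, retire old ones
    #   USR1        reopen log files
    #   USR2        upgrade the executable on the fly
    #   WINCH       gracefully shut down worker processes
    $ kill -QUIT "$(cat /var/run/nginx.pid)"   # graceful stop; pid path may differ
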
How the heck am I supposed to pronounce that? "Free-en-gen-icks"?
Curious how to support Maxim despite Russia complications.
Seems like an annoying but necessary thing, so let's give the original a quick death and migrate to freenginx.

Infrastructure like that should not be run by for-profit corporations anyway; it will always end up like this sooner or later.

s.gif
As someone who used Apache 1.3.x through 2.x heavily from 2000 to 2015, I respectfully disagree with this statement. Nginx and Traefik are easier to configure, have better communities and in most cases perform better.

Traefik open source is my go-to for almost all of my use cases these days, and I have never stopped and said "hmmm, I wonder if Apache would do better here." It is that good.
