ZeroVer: 0-Based Versioning
source link: https://news.ycombinator.com/item?id=28154187
So it was a bit disheartening that the founders never bumped the version to 2.x once the rollout was achieved. It's perhaps a bit nitpicky, but it felt like the work wasn't properly appreciated.
I'm a one man shop though so I never branch for major/minor releases.
Honestly, for my one-man projects I use 0. to indicate "there are absolutely no backwards-compatibility guarantees because I'm still fucking around" and 1. to indicate "this is in prod and I'm confident about it" (with attendant discipline on how semver major/minors are supposed to be used).
> Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.
> Version 1.0.0 defines the public API. The way in which the version number is incremented after this release is dependent on this public API and how it changes.
> If your software is being used in production, it should probably already be 1.0.0.
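The quoted rules reduce to three mechanical bump operations. A minimal sketch (the `bump` helper is mine, not from the spec):

```python
def bump(version, change):
    """Apply a SemVer bump: 'major' for breaking changes,
    'minor' for backwards-compatible features, 'patch' for fixes."""
    major, minor, patch = (int(p) for p in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")

print(bump("1.4.2", "minor"))  # → 1.5.0
print(bump("0.9.8", "major"))  # → 1.0.0
```

Note that nothing in the mechanics stops a project from never calling the "major" case; that's exactly the ZeroVer failure mode.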
I say that mostly jokingly, but stuff like this was really annoying around the turn of the century, in a death by a thousand cuts kind of way.
Please, just put the full year in. It's only two more digits, and will prevent the older people that see it from building a little bit of rage up every time they see it.
But for common libraries, SemVer feels like a good solution for not breaking the main products, and it helps make developers think about breaking changes, etc.
you can't prove me wrong
I'd bet everything I own against that
It is also much easier to reference when talking with other devs, users, etc.
We all know the calendar and a date is much easier to remember.
Straight increases (5, 6, 7) are also easier for users to reference.
An internal rewrite where all the "old bugs" are fixed, but minimal new features are added may feel like a 2.0 for those who worked on it, but for external customers it's the same tool, with the same functionality, just maybe looks a little different.
A 2.0 is often heralded with marketing fanfare, so it needs justification.
I'm not saying it's right, or that one rule fits all; I've seen it first hand and feel your pain.
Versions were bumped now and then, without real major changes.
They use this as a marketing gimmick to create buzz that something major is being released, but actually it was the same old stuff, just not ready for primetime.
When I have to use a 0.x-versioned package because it's a (hard or soft) dependency of some popular framework/platform/tool that "everyone" is using... I feel personally insulted. Like they're saying "no, we're not willing to say this is production-ready, we're not willing to say that we're not going to break backwards compatibility every month or so in a way you can't predict from the version number so can't do automated dependency updates ever, but, hey, everyone else is using it, what's your problem?"
It's like the ultimate embrace of the typical HN "open source maintainers owe you nothing and you should expect nothing of them" argument. Which is true, but if all packages literally made no effort to meet user needs and we had learned to expect nothing from them, we would never be using open source....
If something is 'production ready', it's definitely past the '0.y' stage by definition. At least if you actually stick to semver, as '0.y' is explicitly defined as reserved for initial development.
Reply: OPEN-SOURCE DEVS DON'T OWE YOU ANYTHING!!!
Come on, at least try to engage with what's being said.
OP: I have a problem with open source projects where the marketing and version number don’t match. If it looks like a prod, and smells like a prod, it’s a freaking prod. You owe us a 1.0.
Reply: Okay, but maybe don’t go gobble up any project out there that has a nice landing page and docs and get fooled into thinking it’s a prod. The devs are probably using semver to try and communicate with you. If the project smells like a prod to you but is only version 0.8 and that makes you antsy, then don’t be part of your own problem. Don’t use it in prod. Simple.
Also plenty of software is stable enough to be used as 0.8 even in prod even if it’s not feature complete and the maintainers don’t consider it 1.0.
If you want your 0.8 to magically become a 1.0, have you considered sponsoring the project so that some devs can work full time on it? Hmm.
> ZeroVer is satire, please do not use it.
We use open source because through either corporate incentive or altruistic passion, some packages do put in the effort to meet user needs. That happens whether you think they owe you anything or not, which they don't. All you achieve by raising the floor of expectation is discourage non-production-ready open source contributions, which I think are still valuable regardless of how unstable they may be.
I tend not to release projects just because I think people should expect something from me if I do.
On another note, I'm totally down to accept that OSS projects be malleable, but it does irk me when someone:
1. brags a ton about their project just to find it's really not that great or
2. changes things with almost no concern for who it will affect and how or
3. refuses to change things for some ideological reason when tons of people are asking for it ("that's not the right way to do it!" "you don't know my project")
Wait, insulted by whom? The package developers are surely not obliged to develop their package to a standard that satisfies you, and it's hardly their fault if someone else feels that their package is useful enough to integrate it into something popular. Insulted by the popular framework that uses it … maybe, but you are not willing to forgo the use of 0.x-versioned packages even when it is a point of principle for you, so why should the developers of those frameworks forgo it when it is not a point of principle for them?
Project | Stars | Released | Releases | Current Version | 0ver years
React Native | 96,747 | 2015 | 359 | 0.65.0-rc.2 (2021) | 6.3
...lol. God forbid tor, sklearn, or react-native release a v1 -- people might expect things!
They only used single letters:
* https://www.openssl.org/news/changelog.html
The IEEE Ethernet standards are using double letters though:
> * https://www.openssl.org/news/changelog.html
From that link:
> When a release is created, that branch is forked off, and its changelog is also forked. For example, none of the changes after 0.9.8n appear in the other logs, because 1.0.0 was created after that release and before 0.9.8o.
If you look at the changelog in the v0.9.8 branch, you'll see they got up to 0.9.8zh:
https://github.com/openssl/openssl/blob/OpenSSL_0_9_8-stable...
Tor has been around for 17.3 years, now on version 0.4.7.0
Seems getting rock-solid software to >1.0 takes a rock-solid effort.
I guess maybe it's monthly if you count the patch releases? But you can't really claim to "just increase the number by 1 each time" when you have two separate numbers which get incremented for different reasons.
Like at some point just e.g. move from 0.69.0 to some arbitrary number like 7.0.0 or even 70.0.0.
Also, at $WORK we have three separate but related products, each with separate version numbers that we jumped at some point to reach the same value across all 3. One of the products jumped from 3.x to 9.0 I believe.
The reason was that they committed to never do any "major breaking change", i.e. that there would never be a version 2.
At the same time, the backwards compatibility guarantees didn't always work as well as some people liked, so they decided to move from semver to something like "only do minor releases, but sometimes imperfectly, making them somewhat major releases but also somewhat not".
What I advocated for differs in that I want to keep semver. So when you move from e.g. 0.32.0 to 32.0.0, you would still only increment the minor version for non-breaking changes and the patch version for patches.
Though this means that you can now denote patch updates, since in 0.32 the minor updates are like major updates and the patch updates are like minor updates.
This can be especially useful when the release cadence slows down and you want to make a new release with e.g. just some API-doc fixes or bug fixes that don't touch the API.
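A minimal sketch of the mapping being proposed (the `effective_semver` helper and the tuple shape are my own, purely illustrative):

```python
def effective_semver(version):
    """Hypothetical helper: interpret a version under the 0.x convention
    described above, where while major == 0 a minor bump acts as a major
    bump and a patch bump acts as a minor bump."""
    major, minor, patch = (int(p) for p in version.split("."))
    if major == 0:
        # 0.32.4 carries the same promises a post-jump 32.4.0 would
        return (minor, patch, 0)
    return (major, minor, patch)

# The proposed jump from 0.32.0 to 32.0.0 preserves this interpretation:
print(effective_semver("0.32.0"))  # (32, 0, 0)
print(effective_semver("32.0.0"))  # (32, 0, 0)
```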
2. what if you need multiple releases in a month, in a day?
3. if it works, don't touch it
The closest ZeroVer comes is to quote Tom Preston-Werner, "If your software is being used in production, it should probably already be 1.0.0." which is something, but could perhaps be dismissed more thoroughly as undesirable or mistaken in this document.
Given the context, I thought for a minute that you were talking about Apple's presentation of the rationale for a successor to Objective-C.
I think it's a sign of hubris if a project owner goes 1.0 too soon.
We went v1 once we decided our software was ready for other people to use. It meant we'd ensure compatibility via upgrade scripts, and it meant we wouldn't do irresponsible things like tell users "this release requires you to drop your database and start over"
IMHO, if you stay pre 1.0 for years across many releases it means you have too wide a scope for what v1.0 should be.
Essentially, your MVP is v1.0, since that's the first viable product you release.
But what sometimes happens is that people imagine v1.0 as being the full vision with everything and so they never really reach it. And of course sometimes it seems that there is no sensible explanation at all for why a product is still pre 1.0 to the point of being ripe for satire, indeed (and ZeroVer is really on point the way it satirises this!).
> Most experts have come to agree, for all their complexity and absurdity, Kafka's writings have been influential, despite the prevalence of bugs.
I'm loving this.
It is right under your nose. You probably typed or pasted one of these special version strings today, or you will later. Or you most certainly sent one to a colleague in a Slack message this week.
It is of course the git commit hash!
Suck all meaning out of your version by using the commit hash. Never worry about which digit to increment (semver, 0ver) or what date it is (calver) because who remembers the date.
Also has the advantage that it’s easy to check out the source code for a version!
Use git log and pipe that to a file in /var/www and you have release notes with the version numbers!
edit 1: (sorry wrong one) 085bb3bcb608e1e8451d4b2432f8ecbe6306e7e7
edit 2: (ahh, sorry again, this is my latest version) a11bef06a3f659402fe7563abf99ad00de2209e6
edit 3: (this is the one, definitely) ca82a6dff817ec66f44342007202690a93763949
I prefer to version with the file size, which is strictly increasing if you follow those best practices:
- never delete any of your valuable code (commenting out is OK)
- never break up code into several files (makes it simpler to read).
So how does this work if you want to change a line in a function? Do you just comment out the line and write a new version below it? i.e. if I want to fix a bug in my code, I can't just fix it, I have to add a line of code to my program.
How does this system work in practice? The way I code, it seems like this would lead to a huge file that is mostly commented.
Get the number of commits with `git rev-list --count HEAD`.
Get a label that uses the most recent tag plus the number of commits since and the current commit hash with `git describe` (read its documentation, it’s got a few nice knobs to match whether you use unannotated tags, annotated tags, branches, &c.).
This sort of thing is used in Arch Linux packages based on Git repositories, things like this:
pkgver() {
    cd the-git-repository
    git describe --long 2>/dev/null | sed 's/\([^-]*-g\)/r\1/;s/-/./g' || \
        printf "r%s.g%s" "$(git rev-list --count HEAD)" "$(git rev-parse --short HEAD)"
}
… which will give you versions like “1.2.3.r45.gdeadbeef” for commit deadbeef which is 45 commits past the one tagged 1.2.3, or “1.2.3.r0.g01234567” for commit 01234567 which is tagged 1.2.3, or “r1234.gdeadbeef” where you have no tags and 1234 commits from the root(s) until HEAD which is at commit deadbeef.
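A hypothetical parser for those labels, to show the three pieces they encode (most recent tag, commit count since it, short hash):

```python
import re

# Matches pkgver-style labels like "1.2.3.r45.gdeadbeef" or "r1234.gdeadbeef";
# the pattern is my own sketch, not part of any tool.
PATTERN = re.compile(r"^(?:(?P<tag>.+)\.)?r(?P<count>\d+)\.g(?P<hash>[0-9a-f]+)$")

def parse_label(label):
    """Split a label into (tag or None, commits-since-tag, short hash)."""
    m = PATTERN.match(label)
    if not m:
        raise ValueError(f"not a pkgver-style label: {label}")
    return m.group("tag"), int(m.group("count")), m.group("hash")

print(parse_label("1.2.3.r45.gdeadbeef"))  # ('1.2.3', 45, 'deadbeef')
print(parse_label("r1234.gdeadbeef"))      # (None, 1234, 'deadbeef')
```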
Contrast: v0.4, v1.0, v1.0.1, v1.5 versus a bunch of SHA-1 hashes.
Eventually people will start making random commits like "random commit to change hash" when commits accidentally result in racist or taboo terms lol
The advantage of semantic versioning (SemVer) over git hashes is… well… semantics. You can immediately identify if the next version introduces a breaking change, for instance. This without mentioning ordering, and being easily able to tell versions apart - e.g: which (if any) versions are different in these three versions?
c26cf8af138955c5c67cfea96f9532680b963628, c26cf8af130955c5c67cfea96f9532680b963628, c26cf8af130955c5c67cfe9a6f9532680b963628
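(If you actually want the answer: all three differ, and a few lines of Python will locate the differing characters. The snippet below is just an illustration of how unfriendly hashes are as version labels.)

```python
# The three hashes above differ only by a character or two; scan the
# columns to find where, using only the standard library.
hashes = [
    "c26cf8af138955c5c67cfea96f9532680b963628",
    "c26cf8af130955c5c67cfea96f9532680b963628",
    "c26cf8af130955c5c67cfe9a6f9532680b963628",
]
for i, chars in enumerate(zip(*hashes)):
    if len(set(chars)) > 1:
        print(f"position {i}: {chars}")  # prints positions 10, 22 and 23
```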
You can get easy check-outs and git-logs by using git tags for each version.
Yay, every version is a patch and who cares if there's breaking changes.
So now I need full QA before I pull in every security patch: a lovely idea in theory but in practice it's just a major motivator for engineers to leave their dependency tree to grow horribly out of date and littered with known vulns.
> ... or what date it is (calver) because who remembers the date.
Have to agree with this though, calver is pointless.
All it does is cater to the general bias people have toward "new is better" and "well maintained = frequently added new features", neither of which is true.
Semantic versioning is mostly bullshit anyway. You can never guarantee that a non-breaking change is non-breaking for every single one of your users. Anecdote: just yesterday our entire CI process was taken down by a bump in the AWS ebcli package from 3.20.0 -> 3.20.1. This should have been a non-breaking change, but instead an entire day's worth of release management ground to a halt.
Pin your versions. Test all upgrades.
Semver brings net benefit for the times when it is applied correctly. For the rest, we do our best to test as much as we can.
Just assuming you can upgrade something from 3.20.1 over 3.20.0 because it just happened to release 9 hours ago is a recipe for disaster. Pin your version to 3.20.0 - regularly review your dependencies, upgrade and test - THEN roll it out system wide. Don't just assume something is going to work and roll it out to everyone without checking because some arbitrary versioning scheme says it should.
Be thoughtful. Methodical. Consistent.
You are misunderstanding the scope of SemVer. It's purely about intentional changes to public interfaces. It should go without saying that no versioning scheme can ever hope to indicate whether or not new bugs have been introduced or uncovered that impact your consumer of said interface, nor can it possibly anticipate potential bugs in consumers that could be triggered by any changes that result in valid but previously unseen data being returned. It also doesn't say anything about the update process itself. If an update fails in such a way that your consumer can't use the interface that has nothing to do with the version number.
No one reasonable has ever claimed that SemVer could allow you to just YOLO your updates without testing. When implemented properly it provides a comfort level for updates that should give you an idea of how closely you should look at the changes being made and how they might impact whatever you have consuming that interface.
I've found calver useful for applications. It makes it clear how old the version you're running is. IntelliJ and Windows both use it and I prefer it to a version number you have to look up.
The utopian ideal for apps in cases where someone might choose to use an older version (presuming LTS) would be somehow naming the release based on features, but that's not practical so date-based versioning is a really good proxy here.
Calver's uselessness only really applies to software dependencies.
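As a sketch of what date-based versioning looks like in practice (the YYYY.MM.DD shape is just one common variant; CalVer schemes differ):

```python
from datetime import date

def calver(d=None):
    """One common CalVer shape, YYYY.0M.0D (e.g. 2021.08.12).
    A sketch only; real schemes also vary in zero-padding and granularity."""
    d = d or date.today()
    return f"{d.year}.{d.month:02d}.{d.day:02d}"

print(calver(date(2021, 8, 12)))  # → 2021.08.12
```

One nice property: the versions sort lexically in release order, and a user can tell at a glance how stale their install is.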
This is incorrect. You can create a new commit with a different hash, but the commit hash being a checksum of the commit contents will never change for the same commit.
It will if the parent changes, for example on a rebase. Now, it is up to discussion on whether that is a different commit (technically the parent hash is part of the commit), but most people probably consider the content changes to be the actual commit.
So I don't disagree with you; if we're being pedantic, the commit hash never changes for the same commit [0]. I just pointed out that what someone without an in-depth knowledge of git would intuitively consider to be a commit can indeed have changing hashes.
[0] There does not seem to be an authoritative definition for the git-type of commit, so I guess we need to go with "a commit is what git considers a commit".
That said, I wish developers realised more often how meaningful version numbers are for the users. Zero as a major version screams from a distance: "we're just testing stuff out, expect features to be added and promptly removed because they shouldn't be there, this is not production-ready, use it at your own risk".
That made me laugh out hard :D
I make my point by always numbering my first prod release as 1.0.0.
It is never going to be perfect and it is always going to be in state of flux.
Why try to push responsibility onto the client by telling them, "Guys, we warned you, this is still version 0"?
I think the reason projects get stuck at 0.x is because of interface stability expectations. When you're at version 0, you have the freedom to realize you made a bad choice and break interfaces to improve it in the long run. It's nice to know you have that freedom and that you're not committing to maintaining something in a state that is painful to work with.
If anything, I think perpetual v0 dilutes the concept, as it gets people used to projects being stable at v0 rather than considering them alpha.
(Though, on the other hand, maybe it's more just honest signaling of expectations. "We are a 0.x project, we value developer speed and a clean code-base more than backcompat or stability for users. We will NEVER grow up!" We could call it Peter Pan versioning.)
This really should be the term for projects that are perpetually releasing 0.x versions.
Maintainers are free to release their software under whatever terms that they wish, but if you actively encourage people to use your software and do not want to maintain backwards compatibility, then IMO, you should be very honest about that.
It says that there's no universal agreement on this one thing, and while if you want to sell to certain people you should tailor your versioning appropriately, there's zero point in assuming a single interpretation. If I use CalVer some people who think there's only one way to do this may assume the product has had 2021 major version releases.
That would be a silly assumption.
Similarly to https://www.hyrumslaw.com
I use ZeroVer on documents that are in a draft state to signal that they're draft, and the version drops once the doc is final.
SemVer is great for protocols, as it signals compatibility promises, and the ZeroVer alias of a protocol version means it's in development (there be dragons).
CalVer is great for software releases, operating systems, etc, as it makes debugging/triaging easier.
Whichever you use, version your protocols and releases please. It's an incredibly common problem with software, that people don't think about the value of versioning and the impact it has on parallelism in engineering/development.
Versioning. Do it.
Maybe being perpetually at 0 gives you the "It's beta software, you need to update bro" defence against maintaining a stable API, and all the extra overhead that comes with it.
Yes and yes. But isn’t that kind of the point of good satire?
That doesn't mean intentionally breaking backwards compatibility but also this isn't my paid day job so I might do it and I'm not going to spend my evenings compiling release notes / migration guides for free.
In this way it's an effective anti-big-corp shield since a lot of enterprises have dumb rules about using beta/alpha versions. I think it's a strong signal where people can't just use a beta or pre-release version of some code that they're a) not auditing their dependencies and b) probably work somewhere that has the money to pay for that level of service but just want you to do it for free.
But our versioning scheme doesn't include zero.
Not even for minor version numbers.
As we (rightly) believe our customers don't trust them (for good reasons).
Perhaps the correct sentiment would be "If you're not used to reading, you'd be surprised that 1.10 comes after 1.9 and vice versa".
I don't know what an average German would make of "1.10", really. I've only seen that as "subsection 10 of section 1 of a book".
Both of these have seen 1.x releases.
That said, I like to think they did 1.x releases specifically because they were hoping to get off this Wall of Shame.
The only pass terraform gets is that it was the least worst of IaC at the time. And even now in the day of k8s, on-cluster CI/CD, and AWS-account-per-team, something has to provide the initial k8s cluster, VPC, Client VPN, etc. before your devs can use an account.
There might have been some pain in specific providers, but at the same time those updates were going on, all three major cloud providers rewrote their own APIs.
Everything since 0.12 has been literal sunshine and rainbows flying out of our butts. The tool is _that good_ now.
I’m not sure what you mean by “all three major cloud providers rewrote their own APIs.” Azure[1][2], AWS[3], and Google[4] are all maintained by Hashicorp. In fact, if you peruse the issues you’ll often see PRs opened by employees of the respective providers trying to fix blocking issues and they often devolve into literal begging for Hashicorp to respond and at least tell them why something hasn’t been merged. I know one blocker[5] actually cost Azure a very substantial customer as it languished in Hashicorp’s queue.
Hashicorp’s constant refrain of “Well it’s a 0 version software” while selling enterprise support and constantly shilling their wares as production ready across the entire DevOps space was dishonest.
I appreciate the position they were in and I appreciate even more their attempt to at least put out a good PR move with their 1.0 release. We will see how well it holds up over the years.
What you call “that good” I call “better than everything else but still byzantine and hellish to deal with every time someone DMs me, ‘hey, you know terraform right?’”
1: https://github.com/hashicorp/terraform-provider-azuread
2: https://github.com/hashicorp/terraform-provider-azurerm
3: https://github.com/hashicorp/terraform-provider-aws
4: https://github.com/hashicorp/terraform-provider-google
5: https://github.com/hashicorp/terraform-provider-azurerm/pull...
That's an issue with those providers' APIs and how their Terraform provider was architected. Most of the other Terraform providers were smooth-sailing during the same period, save for the major challenges involved in updating 0.11 to 0.12.
You're putting the blame in the wrong place. In the case of all three of those cloud providers, the companies' own employees plus outside contributors maintain the terraform providers, not HashiCorp. HashiCorp gets involved but mostly to resolve errors in Terraform itself.
Saying they are maintained by HashiCorp is completely incorrect. They are part of HashiCorp's repos (because they are official) but in each case the core contributors are people from AWS, Azure and Google.
My company has large accounts with all of these, I contribute to them myself and I know(/knew. Dana @ Google moved onto another role and Google hasn't introduced me to her replacements yet) the maintainers of all of them personally. Don't look at who owns the repo, look at the contributor lists.
Like yes, our internal provider was a pain point and we own that but the rest of the drama around providers was just weird. The work to move over 200 repositories through the hoops to keep them updated, especially mature services that may not have been deployed for several months, was difficult to automate and very brittle even when it was.
It broke down at scale, and really no one should have been using it that widely before it was 1.0.
Hashicorp’s whole treatment of terraform 0.x was horrendous and constantly broke everything, all while they said it was production ready. You can blame whoever you want but the total lack of stability and easy upgrade paths and constant manual fiddling and reviewing output from ‘upgrade0.11’ type commands was ridiculous and a massive time sink for our org.
Also the treadmill is really not any different than integrating with _ANY_ Google service. In fact I'd say it's an order of magnitude better. Google has set a standard of breaking changes without notification and if that's one of the providers you're using then I understand.
And well, if it was Azure (as it likely looks to be) the state of their public facing APIs is/was an absolute fucking mess and the preferred way to do anything in their system still seems to be using the UI. I've talked to several people at Microsoft at Azure teams responsible and there's multiple compounding problems there. For one you have 200+ engineering teams with no unified approach to exposing services. Then you've got multiple regions in their cloud that for years didn't have the same authentication system, didn't have consistent features between regions, etc.
There's very little you can lay at Hashicorp's feet for this when the underlying systems themselves have very poor automation.
And then you talk about having 200+ repos and services that haven't been deployed for months and all I can say is the consensus around the need for CI/CD is over a decade old now and infrastructure needs these things just as much as code does.
100% of my Terraform is in a CI/CD pipeline. Yes it was a lot of work to set up, but the alternative is nothing but problems. Terraform is just a tool. It's not a panacea. It will not make all of your problems go away -- it's up to the craftsman how good it is.
Honestly, nobody gives a damn about software version numbers outside the developer world.
Do you think your endusers care about semantic or zero versioning? Probably not.
I liked the Ubuntu (or previously msoffice) approach, as people can predict when new software is going to be released, and they immediately know how old an installation is.
Maybe someone should propose a year / datetime based versioning scheme here...
https://github.com/fail2ban/fail2ban/commit/3f5c382a988bb21f...
Well, the master branch is considered 1.0 at least. Not sure if there's been an official 1.0 release though...
The linear version history of a RCS file goes 1.1, 1.2, 1.3.
If we shoot a branch off, say, 1.2, that becomes 1.2.1.1, 1.2.1.2, 1.2.1.3, ...
Switching that dead horse to zero-based would have been a solution in search of a problem even in its heyday when it was considered viable. (BSD people, feel free to read that as "today").
We don't do arithmetic on version numbers, or not any that involves higher power operations like multiplication so the origin of the numbering doesn't matter.
The components of versions are not always numeric anyway. Is 1.3.A zero-based or not? Is A the zero of the alphabet or the one?
Versions for creative works have been traditionally one based. The first edition of a book is edition 1. The only good thing about zero is for indicating an alpha or beta version not considered to be released/published.
As @twobitshifter (facetiously?) & @arcatek wrote, it messes negatively with the minds of developers.
And, for decades we have been training non-technologists 0. is not production, 1. is better than 0., and 2. is better than 1.
This entire section reads like a joke post
> 0verview Unlike other versioning schemes like Semantic Versioning and Calendar Versioning, ZeroVer (AKA 0ver) is simple: Your software's major version should never exceed the first and most important number in computing: zero.
A down-to-earth demo:
YES: 0.0.1, 0.1.0dev, 0.4.0, 0.4.1, 0.9.8n, 0.999999999, 0.0
NO: 1.0, 1.0.0-rc1, 18.0, 2018.04.01
In short, software versioning best practice is like the modern list/array: 0-based.
We'll leave it to computer scientists to determine how expert coders wield the power of the "zero-point" to produce top-notch software. Meanwhile, open-source and industry developers agree: ZeroVer is software's most popular versioning scheme for good reason.
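The quoted rule is easy enough to state as a (tongue-in-cheek) compliance check; `is_zerover` is my own hypothetical helper, matching the YES/NO lists above:

```python
def is_zerover(version):
    """ZeroVer compliance: the major version (everything before the
    first dot) must be exactly 0."""
    major = version.split(".", 1)[0]
    return major == "0"

assert all(is_zerover(v) for v in
           ["0.0.1", "0.1.0dev", "0.4.0", "0.4.1", "0.9.8n", "0.999999999", "0.0"])
assert not any(is_zerover(v) for v in
               ["1.0", "1.0.0-rc1", "18.0", "2018.04.01"])
```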
> Franz Kafka, who lived as an author in turn-of-the-20th-century Austria
Kafka lived his whole life in Prague, Czech Republic.
Yes, it was at the time within the borders of the Austrian-Hungarian Empire, but calling him Austrian is irksome.
Kafka had been dead for 70 years when the Czech Republic came into existence. He only lived a handful of years in Czechoslovakia, since he died young. So saying he lived most of his life in Austria is more correct than assigning him to an entity which didn't exist at all at the time.
You can't shorten the Austrian Hungarian Empire to Austria or Hungary. It's both lazy and wrong.
Also, the Czechs fought hard for their independence and achieved it during Kafka's lifetime. In the same way, literally no one says, "George Washington, Ben Franklin and Thomas Jefferson, who lived in 18th-century England..." We don't call them English, despite having been born in a colony of England, because they identified themselves as American and were officially recognized as that in their lifetime.
I have multiple sub components of our system that are effectively ZeroVer, not because I love zero ver at all, but because basically a simple linear integer would have worked, so I just let the patch field roll so that Debian/Apple/Elixir/whatever tooling stays happy.
For customer-facing stuff, I've moved to calendar versioning (2021.08.12). It's easier, and no more or less effective than arbitrary and debatable decisions to codify some value when major/minor/patch changes.
> No due date
> Last updated over two years ago
Looks like a classic case of ZeroVer!
...software versioning best practice is like the modern list/array: 0-based.
elsewhere in TFA:
Welcome to ZeroVer 0.0.1.
Anyone familiar with zero-based lists surely would have suggested "0.0.0"?
The moment your software is production ready it should not take long until you use major version numbers.
If you are afraid of users expecting nothing to break after 1.0, don't stay below it, as the user also expects 1.0. Instead, break expectations by avoiding 1.0 and 2.0 specifically: e.g. jump from 0.69.0 to 69.0.0, or some other arbitrary number like 42.0.
It can mean that the authors think that they are not done with it.
In some cases, the cause can be an insufficiently defined scope, with feature creep as a consequence.
In other cases, authors might have the correct intuition that the first impression can be very important, so going out and releasing Version One can cause great anxiety.
With this, it's trivial to generate new version numbers, there's no version number angst, and version numbers are inherently ordered. Detractors will argue that this system doesn't convey compatibility or magnitude-of-change information, but that's a feature.
“One shall not speak of 1.0!” (Not an actual quote).
If 0ver works for your use case, go with it, by all means. It’s sortable (“nat-sortable” to be more precise) and it will get the job done. And so are many other versioning schemes.
Think about your release cadence, how you will release new versions, patches and fixes, and then pick one that makes sense to both humans and your code.
Usually there are packages with big api differences for the first 3 versions or so. The major version number is used to discuss bugs, development, etc. New versions usually entail a beta period that results in two active versions for a while.
This would allow the major version to be used more often for breaking changes to help with automated dependency update tools.
> The first version of Enlightenment was released by Rasterman (Carsten Haitzler) in 1997.
> Version 0.17, also referred to as E17, was in development for 12 years starting in December 2000 until 21 December 2012, when it was officially released as stable.
He set up my Linux login with Enlightenment before many people had even heard of it. My fellow students in the same year were using some hideous black & white (not even greyscale!) window manager, meanwhile I had this awesome colourful setup that felt like it was from the future.
Fun times...
If someone says they're 0.11.4 and they broke the API 115 times in major, non-compatible ways over the last decade, they're 115.x.x, period. To assume otherwise is just admiring window dressing.
The canonical URL is https://0ver.org/zerover_0_based_versioning.html
The entire site only has a single blog post.
Go figure.
The minor numbers should always start at zero.
The major number starting at zero lets us express that the program is not released.
I.e. we are actually using 1-based versioning for the major number, and are reserving the 1 for the first stable release when the beta program is considered to have shipped.
I'm still on 0.9... , for, y'know, religious reasons. 18 years on a 0ver is hard to quit!
(Seriously, I'm not sure the 1.0 is in the Kubuntu repos for my version).
It is way easier to compare something like 1.0 & 2.0 than it is to compare 0.1.9 & 0.2.0-rc.2
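Part of what makes 0.1.9 vs. 0.2.0-rc.2 hard to eyeball is the prerelease rule: per the SemVer spec, a prerelease sorts *before* the release it precedes. A minimal sketch of that ordering (it ignores the spec's finer-grained rules for comparing two prerelease tags against each other):

```python
def semver_key(version):
    # Minimal SemVer ordering sketch: "0.2.0-rc.2" < "0.2.0".
    core, _, pre = version.partition("-")
    nums = tuple(int(p) for p in core.split("."))
    # Releases get a trailing (1,) so they sort after any (0, <pre>) tuple.
    return nums + ((0, pre) if pre else (1,))

assert semver_key("0.1.9") < semver_key("0.2.0-rc.2") < semver_key("0.2.0")
```

With plain 1.0 / 2.0 style numbers none of this machinery is needed, which is the point being made above.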
Plus then you get the added advantage of Major/minor versions.
I’ll continue using ChronVer. https://chronver.org
(Full disclosure, this is my site)
{major release timestamp}.{minor release timestamp}.{last patch timestamp}, which winds up like: 1542051468.1576431468.1613410668
Very clear and easy for users to understand.
I'm also happy that its list of notable ZeroVer projects includes Dwarf Fortress, currently at v0.47.05, 15 years after the first version was released.
*edit: yes, published 1 april 2018
Consider you’re on v1.2.1 of some package, pinned to your requirements as ”^1.2.1”.
A security issue gets fixed, v1.2.2 is released, your CI/CD can manage this upgrade automatically.
A new feature is introduced, v1.3.0 is released, your CI/CD can still manage this upgrade automatically. Developers know that there are new features because the minor version was incremented.
But then, some core API refactoring is done to support new integrations, which happens to break backwards compatibility. Now, v2.0.0 is released, and it is obvious to developers that something major has changed. Your CI/CD does not do this upgrade automatically.
So in the end we would still end up not trusting it, relying instead on testing (hopefully as automated as possible) before we could merge the upgrade.
In the end we just switched to using timestamp-based version numbers and just try to upgrade often enough so each incremental change is small. And try to have good automatic tests that can do most of the regression testing for us.
If my CI says a major release broke my build, I think "OK, I will put this aside and schedule time to deal with it based on how important it is to upgrade; maybe I'll take a peek at the release notes (breaking changes section) now to get a sense of what might have broken and how much work it might be." And reporting a bug upstream is unlikely to be part of the work.
It's important signalling that helps us all cooperate to produce stable software, not just our own software, but the open source ecosystem we (in the best case) cooperatively develop.
It's also just about the only sane way to manage indirect dependencies that might be depended upon by multiple activated things. Widget and Doodad are both used by my project, and both depend on Button. How do we figure out what version(s) of Button will be (are intended to be, sans bugs) compatible with both of them?
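The Widget/Doodad/Button question reduces to intersecting version ranges. A hypothetical sketch, again assuming npm-style caret pins (Widget, Doodad, and Button are the example names from the comment above):

```python
def caret_range(pinned):
    # "^2.1.0" -> half-open range [(2,1,0), (3,0,0)) under caret semantics.
    major, minor, patch = (int(p) for p in pinned.split("."))
    return (major, minor, patch), (major + 1, 0, 0)

def compatible(*pins):
    # A single version of Button satisfies everyone iff the ranges overlap:
    # the highest lower bound must fall below the lowest upper bound.
    lows, highs = zip(*(caret_range(p) for p in pins))
    low, high = max(lows), min(highs)
    return (low, high) if low < high else None

# Widget pins Button at ^2.1.0, Doodad at ^2.4.2: an overlap exists.
print(compatible("2.1.0", "2.4.2"))   # ((2, 4, 2), (3, 0, 0))
# Doodad moves to ^3.0.0: no single Button version satisfies both.
print(compatible("2.1.0", "3.0.0"))   # None
```

Resolvers in npm, Cargo, etc. do far more than this, but the major-version boundary is what makes the intersection computable at all without reading release notes.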
And I definitely don't read the release notes of all my dependencies, even indirect ones, every time I update. I lock to major versions (and hope my dependencies do too for their dependencies), and only look at release notes if an update breaks a build.
I like Ubuntu when it uses its CalVer versions, but if I have to look for a specific package, I still have to check which one Xenial is after looking at the apt sources.