
Complete rewrite of ESLint · Discussion #16557 · eslint/eslint · GitHub

source link: https://github.com/eslint/eslint/discussions/16557

Complete rewrite of ESLint #16557

Unanswered

nzakas asked this question in Ideas

Introduction

ESLint was first released in 2013, meaning it will be ten years old next year. During that time, the way people write JavaScript has changed dramatically, and we have been taking an incremental approach to updating ESLint. This has served us well, as we've been able to keep up with changes fairly quickly while building off the same basic core as in 2013. However, I don't believe that continuing to make incremental changes will get us where ESLint needs to go if it's to be around in another ten years.

Even though we are close to rolling out the new config system, which is the first significant rearchitecture we've done, that effort is what led me to believe that it's time for a larger rewrite. We are still stuck on implementing things like async parsers and rules because it's difficult to plot a path forward that doesn't cause a lot of pain for a lot of users. This seems like the right time to stop and take stock of where we are and where we want to go.

Goals

I've been thinking about where I'd like ESLint to go next and have come up with several goals. These are pretty abstract at the moment, but here they are, in no particular order:

  1. Completely new codebase. Starting with a completely new repo will allow us to continue to maintain the current version of ESLint as long as necessary while ensuring we are making non-breaking changes on a new version.
  2. ESM with type checking. I don't want to rewrite in TypeScript, because I believe the core of ESLint should be vanilla JS, but I do think rewriting from scratch allows us to write in ESM and also use tsc with JSDoc comments to type check the project. This includes publishing type definitions in the packages.
  3. Runtime agnostic. ESLint should be able to run in any runtime, whether Node.js, Deno, the browser, or other. I'd like to focus on creating a core package (@eslint/core) that is runtime agnostic and then runtime specific packages (@eslint/node, @eslint/browser, etc.) that have any additional functionality needed for any given runtime. Yes, that means an officially supported browser version!
  4. Language agnostic. There's nothing about the core of ESLint that needs to be JavaScript specific. Calculating configurations, implementing rules, etc., are all pretty generic, so I'd like to pull the JavaScript-specific functionality out of the core and make it a plugin. Maybe @eslint/js? I envision a language implementation being distributed in a plugin that users can then assign to specific file patterns. (This would replace the parseForESLint() hack.) So ESLint could be used to lint any file format, so long as someone has implemented an ESLint language API for it.
  5. New public APIs. Our public API right now is pretty messy thanks to the incremental approach we've taken over the years. ESLint was never envisioned to have a public API beyond the Linter class (which started out as a linter object), and we've continued hacking on this. Right now we have both an ESLint class and a Linter class, which is confusing, and they both do a lot more than just lint. I'd like to completely rethink the public API and provide both high-level APIs suitable for building things like StandardJS and the VSCode plugin and low-level APIs that adhere to the single-responsibility principle to make it possible to do more creative mixing and matching.
  6. Rust-based replacements. Once we have a more well-defined API, we may be able to swap out pieces into Rust-based alternatives for performance. This could look like creating NAPI modules written in Rust for Node.js, writing in Rust and compiling to WebAssembly, creating a standalone ESLint executable written in Rust that calls into the JavaScript portions, or other approaches.
  7. Async all the way down. Async parsing, rules...everything! We've had trouble making incremental progress with this, but building from scratch we can just make it work the way we want.
  8. Pluggable source code formatting. Stylistic rules are a pain, so I'd like to include source code formatting as a separate feature. And because it's ESLint, this feature should be pluggable, so you can even just plug in Prettier to fulfill that role if you want.
  9. Reporters for output. The current formatters paradigm is limited: we can only have one at a time, we can't stream results as they complete, etc. I'd like to switch to a reporters model similar to what Mocha and Jest have.
  10. AST mutations for autofixing. This is something we've wanted for a long time. I see it as being in addition to the current text editing autofixes and not a direct replacement.
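To make goals 3 and 4 a bit more concrete, here is a minimal sketch of what a language-agnostic core might look like. Every name here (the language object's parse/visitorKeys shape, the lint helper, the toy JSON "language") is hypothetical -- none of this is an existing or proposed ESLint API:

```javascript
// Hypothetical shape for a pluggable language implementation (goal 4).
// The core never hard-codes a JS parser; the plugin supplies parsing.
const jsonLanguage = {
  // Turn source text into an AST of the plugin's choosing.
  parse(text) {
    return { type: "Document", value: JSON.parse(text) };
  },
  // Tell a generic traverser which properties hold child nodes.
  visitorKeys: { Document: ["value"] },
};

// A language-agnostic lint function: a rule is just a visitor over
// whatever AST the language produced.
function lint(language, text, rule) {
  const messages = [];
  const ast = language.parse(text);
  rule({ report: (msg) => messages.push(msg) }, ast);
  return messages;
}

// Example "rule" for the toy JSON language above.
const noEmptyDocument = (context, ast) => {
  if (ast.value && typeof ast.value === "object" &&
      Object.keys(ast.value).length === 0) {
    context.report("Unexpected empty JSON document.");
  }
};

console.log(lint(jsonLanguage, "{}", noEmptyDocument));
// Reports one message for the empty document.
```

The point of the sketch is only that nothing in lint() knows about JavaScript; the same core could drive rules over JSON, CSS, or Markdown given a language object for each.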

Maybes

These are some ideas that aren't fully hatched in my mind and I'm not sure how we might go about implementing them or even if they are good ideas, but they are worth exploring.

  • Make ESLint type-aware. This seems to be something we keep banging our heads against -- we just don't have any way of knowing what type of value a variable contains. If we knew that, we'd be able to catch a lot more errors. Maybe we could find a way to consume TypeScript data for this?
  • Make ESLint project-aware. More and more we are seeing people wanting to have some insights into the surrounding project and not just individual files. typescript-eslint and eslint-plugin-import both work on more than one file to get a complete picture of the project. Figuring out how this might work in the core seems worthwhile to explore.
  • Standalone ESLint executable. With Rust's ability to call into JavaScript, it might be worth exploring whether or not we could create a standalone ESLint executable that can be distributed without the need to install a separate runtime. Deno also has the ability to compile JavaScript into a standalone executable, so Rust isn't even required to try this.

Approach

For whatever we decide, the overall approach would be to start small and not try to have 100% compatibility with the current ESLint right off the bat. I think we've added a lot of features that maybe aren't used as much, so we would focus on getting the core experience right before adding, for example, every existing command line option.

Next steps

This obviously isn't a complete proposal. There would need to be a (massive) RFC taking into account all of the goals and ideas people have for the next generation of ESLint. My intent here is just to start the conversation rolling, solicit feedback from the team and community about what they'd like to see, and then figure out how to move forward from there.

This list is by no means exhaustive. This is the place to add all of your crazy wishlist items for ESLint's future because doing a complete rewrite means taking into account things we can't even consider now.


Replies: 28 suggested answers · 82 replies

What about AST-based autofixing? That's the sole advantage prettier seems to have over eslint.


@ljharb Oh yes, forgot to include that.


Very exciting....

One big-picture feature I'd like to see is richer AST traversal: ensuring not only parent but also parentProperty, nextSibling, etc. on each supplied node, as well as a scoped querySelector (à la esquery).


I don't want to muddy up the AST with more nonstandard properties (I'd love to actually get rid of parent), but that doesn't mean we can't think of other ways to traverse the AST.

  1. ESM with type checking. I don't want to rewrite in TypeScript

Valid preference, but would there be any chance of providing types within ESLint rather than in the separate @types package?
Currently I'm scratching my head over the flat configs as there's no type definition for them, however the options use existing type declarations. I'm trying to figure out whether to declare my own interface or to modify an existing one.

It could be useful for the large number of TS users if features had their TS declarations added at the same time they are released.
The @types package is at v8.2, 24 minor versions behind, which is generally a pain for TS users as things are hidden / throw compiler errors despite actually existing.

Additionally, given the desire to handle types and be type checked (all via TypeScript), it seems odd not to do the final little bit and simply translate JSDoc typing into TypeScript typing. In my experience I've found TS to be a lot leaner than JSDoc for declaring types, and it's also now the driver behind JSDoc types and inference.
The desire is to stay away from TypeScript, but also fully integrate with TypeScript. I understand the difference, but given the desire to support TypeScript features inherently, why not also support "here's a typedef so you don't have to flick back and forth from editor to docsite"?


I'm not sure that this is a good decision, as I've yet to see any recently released library that didn't eventually end up doing a TS rewrite.

That's a good point. If every other tool rewritten in recent years was done in TypeScript (and not JavaScript), maybe there's a reason and rationale behind this?

Mixing types with JS makes it quite unreadable and it only gets worse as you go into complex logic.

I said the same thing until a year ago. Then someone finally convinced me to give TypeScript a serious try. It turns out the type annotations are not nearly as noisy as I expected. Give it a shot!

I don't think the differences are as drastic as some describe, having contributed to both TS and JS libraries. I've had meh experiences with both. On one hand, TS packages always have type checking, which is great; JS ones have no expectation of this. But we shouldn't compare the worst of JS packages against the best of TS. I mostly settled on JS+JSDocs, but I have two key enforcements that I don't code without:

  • jsdoc/require-jsdoc: All functions should have JSDocs for the purpose of documentation. Regardless of what language you write in, there should be a place for better explaining functions and parameters. I've noticed that enforcing this rule encourages developers to leave useful notes and examples. This applies to TS code or JS code, but if you're writing in TS and have to add documentation, then you're writing each function parameter twice. Since TypeScript itself does not read types from JSDocs in .ts files (an odd decision), you have to write the type in the function signature as well as in the JSDoc. With JS, because of this enforcement, it ends up being less code to write.

  • "noImplicitAny": true. The lack of this option is probably the most frustrating aspect of working with JS packages. It's amazing how sometimes just enabling this rule in a repo will expose thousands of TS errors. It's just not a sound development decision to not have this enabled. I can imagine a lot of other's bad experiences also stem from this.

There's obviously more you can add on, but I'd worry about a JS project without these. If you're not keen on enforcing documentation (first of all, why?), then you're writing more JS code and wrapping everything in JSDocs. Just write it in TS. Save your sanity.

But I will say that one of the major frustrations of working with imported packages written in TS is the recompilation issues around distribution and decoupling. This is clearly subjective, and maybe it's more VSCode's fault, but it's a very common source of frustration when I'm trying to work with a package and need to figure out what's going on behind the scenes. VSCode will send me to a typings file, which tells me very little about what the function is doing. Trying to get to the source will throw me to a recompilation of the source, which again forces me to search through source code to figure out what I'm actually calling. If it's compiled to CJS, it points me not to the original source code but to the compiled one, with plenty of require remappings. If it's ESM, it'll be the JS code without any typings or JSDocs, so everything will appear in source as any. That means you'll have to jump between the code and the typings file to understand it, or leave your environment for GitHub to read the real source. With JS+JSDocs, it's all in one location (for better or for worse). I can see all the types, the code, and the documentation with one keyboard shortcut.

That said, there are some things you must acknowledge with JS+JSDocs:

  • Type casting is very verbose. While one of the benefits is that all runtime code is clearly separated from typing (because the types live in comments), being able to use as is just way shorter. I'll admit, I'm sometimes just lazy and reach for ts-ignore instead of trying to wrangle the syntax.

  • You can't reliably write real JSDocs. There's so much that TS has going for it that you will end up adopting TS-specific things like Partial<T> and extends. These don't translate to other tools if you want to, say, use JSDocs to build documentation. Multi-line @typedef is also TS-specific. It goes both ways, too: TS doesn't 100% parse JSDocs. @yield is still unsupported. Also, you can't escape variable names like x5t#S256 in JOSE.

  • TypeScript will not always automatically parse JSDocs. I'm not sure if there's something missing in a configuration somewhere, but I've worked with JS+JSDocs projects that end up with unresolved types when you inspect the source externally. I don't know if there's a way to fix this, but it's something I've seen often. VSCode sometimes doesn't show the @example tooltips either. If I had to guess, generated d.ts files strip documentation and override the source .js files. Again, it might be a packaging mistake.
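On the type-casting point above: for anyone who hasn't seen it, the JSDoc cast form that tsc accepts requires a comment plus mandatory parentheses, versus a short as suffix in TS. A tiny runnable comparison (plain Node simply ignores the comment):

```javascript
// TS version (for comparison):
//   const parsed = JSON.parse(raw) as { name: string };

// JSDoc version: the /** @type */ comment plus mandatory parentheses
// around the expression being cast.
const raw = '{"name": "eslint"}';
const parsed = /** @type {{ name: string }} */ (JSON.parse(raw));

console.log(parsed.name); // "eslint"
```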

Likewise, there are some common misconceptions related to JSDocs that need to be clarified:

  • You can import a d.ts from any JS file. While JSDoc is limited, you can easily spin up a d.ts file, write a type, then import it in JS. I've needed to do this only a handful of times, but it works just fine. All the instances where I had to do this involved recursive types (e.g. export type TupleTree<T> = Iterable<[T, T|TupleTree<T>]>;). I like keeping my types next to code, but if you want to build an entirely separate d.ts file, you can. Also, you don't need to call @typedef {import... if it's a pure definition file. TypeScript will pick up all the types automatically.

  • You do not have to set the type on most variables. It's actually rare to have to use @type inside a function if you enforce JSDocs. Lazy initialization (let foo;) sometimes needs it, but TS will infer the type automatically in most cases. The same applies to @return: noImplicitAny makes writing that JSDoc line optional since TS can infer it. I only add it when I'm being strict.
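A small sketch of that inference point -- with checkJs, tsc infers locals and return types from the @param annotation alone, and lazy initialization is the one spot that still wants an explicit @type (the code itself is plain JS and runs as-is):

```javascript
/**
 * Sum an array of numbers.
 * @param {number[]} values
 */
function sum(values) {
  let total = 0; // inferred as number; no @type needed
  for (const v of values) total += v;
  return total;  // return type inferred as number; @returns is optional
}

// Lazy initialization is the exception: without the annotation,
// `let nums;` would be typed too loosely for later assignments.
/** @type {number[]} */
let nums;
nums = [1, 2, 3];

console.log(sum(nums)); // 6
```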

For real-world examples that I've worked with, I can list three similar projects: moment, date-fns, and date-fns-tz. moment, being JS+typings, sends me to a one-liner require when trying to inspect the source; I have to go to the GitHub page. date-fns is a TS project and will send me to a compiled JS file, and while it's more understandable than moment's, it's still mangled. date-fns-tz is JS+JSDocs, and working with it is clear and simple. The source files are untouched, and I never have to look beyond my node_modules structure.

As with anything, there are pros and cons, but you have to decide what works best for you and what you are willing to maintain. There's no point frustrating yourself over something you will eventually stop caring to support. Whatever you choose, note the cons and reach out for help if you can't find a way to work around them. Also, know your audience. Who is going to be inspecting your source? Who will be contributing code? You don't want to make something hard to work with for your target audience.

I am a maintainer on a large project which initially adopted JSDoc + tsc, until the codebase was fully typed, but has recently started switching to native TS. Our reasons for doing so were:

  1. Friendliness to new contributors. Most documentation about how to specify types that tsc understands uses native TS syntax, and figuring out how to express the same thing with the JSDoc syntax adds an extra layer of indirection. Ironically, this was one of the reasons for using JSDoc initially.
  2. Better tooling for native TS syntax than the equivalent JSDoc. Prettier for example can format TS out of the box, but ignores JSDoc.
  3. The issue mentioned in the previous comment about certain constructs being rather verbose, mainly type-casting.

I do think there is significant value in having code which can be run directly in the browser or Node without any processing though. For build tools etc. which run in Node, we still use JSDoc.

Mixing types with JS makes it quite unreadable and it only gets worse as you go into complex logic.

I said the same thing until a year ago. Then someone finally convinced me to give TypeScript a serious try. It turns out the type annotations are not nearly as noisy as I expected. Give it a shot!

I don't really have a choice about the codebase I have to deal with, so I gave TS more than a try. Some folks get religious about it, but at the end of the day, your perceptions and experience belong to you. There is no doubt in my mind that I prefer reading code with JSDoc above the function to TS annotations mixed within the function code.

In the same way, I am not a fan of CSS-in-JS libraries like emotion, because all that style code makes the component logic and the overall picture harder to grasp. For that reason I prefer Svelte/Vue (single-file components or not) to React. There too, I am sure you will find a crowd starting a fight out of their own perceptions and experience over which is best for what.

This is not a judgement on the value added by typing. I wrote some Elm code in past projects and that was the most delightful front-end experience I ever had. They got that right. The type system is not as good as Haskell's, but it is also far simpler, far more readable, and type annotations are, like in Haskell, separated from the code. Elm is at another balance point between full-fledged type systems (which can be used to prove program correctness, like Coq) and the complete absence of a type system.

When we don't have a choice, we do what we must. I'll live with TS; I am paid to do so and I gladly oblige. But when I have a choice, JSDocs works for me. Not as expressive, but expressive enough. Yet another balance point. To be used in conjunction with a good testing strategy (type safety is not safety).

@clshortfuse I enjoyed reading the summary of your experiences with very concrete examples - that is helpful. More importantly, I hadn't thought about the impact on the maintainer who has to make the choice. It is true that the two or three people who contribute 90% of the code (not unusual in an OSS project) should give themselves a greater say in what technology to use. You said it right: you also need to listen to your target audience. But in the end, the ones who will mostly be working in the codebase are the maintainers. So there again, it is a balance to find between conflicting needs.

Providing types in a package is a mistake; it conflates semver of the types with semver of the actual API.


Providing types in a package is a mistake; it conflates semver of the types with semver of the actual API.

Is there any way you can elaborate on this being an issue?

In practice, I haven't seen any real issue here. When the types update, it's uncommon for that not to signal API changes as well.

@nzakas there are many reasons keeping types in sync is a moving target - especially since every TS minor release is likely to contain breaking changes. Also, the TS team keeps DT types up to date for you (syntax-wise, not API-wise, ofc), so that's an added bonus.

So are you talking about TS syntax changes?

I generate types at build time for a lot of my personal projects, so I'm just not clear on what the dangers are that you're referring to.

Yes, I’m talking about changes in the build output, and the runtime behavior, and also in the type system itself. eslint will need to maintain compatibility with a lot of TS versions to avoid frequent majors, and that is very difficult to do (which is why the TS core team takes it upon themselves in DT, because it’s too hard for everyone else to figure out)

I would like to refute your assertion that it's difficult to support TS versions.

TypeScript-ESLint has been shipping our own types for ESLint for years whilst also internally keeping updated to the latest TS version.

We haven't bumped our minimum TS version in a long time - we currently support as far back as TS 3.3 (released FOUR years ago).

TS releases are not an issue to support into the past. There are tools like downlevel-dts which allow you to write the latest syntax and ship support for older versions alongside new syntax (via package.json typesVersions).
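For anyone unfamiliar with that mechanism: downlevel-dts emits older-syntax copies of your declaration files, and the typesVersions field in package.json routes older compilers to them. A sketch (the dist paths here are illustrative, not from any real package):

```json
{
  "types": "./dist/index.d.ts",
  "typesVersions": {
    "<4.0": { "*": ["dist/ts3.4/*"] }
  }
}
```

TypeScript 4.0 and later resolve the top-level types entry; anything older is redirected into the downleveled dist/ts3.4 copies.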

We are about to bump our minimum TS version in our next major. Though not because of type pains! Instead because we are built directly on top of TS - so by bumping the minimum we can delete some logic we maintained for old versions of their parser tooling to reduce some maintenance burden.


Additionally, TS compilation is stable across versions, or changes are explicitly called out in releases. I don't remember there being one in a while, though.
Regardless, we have never had a breakage in our project due to a runtime change from a change in build output.

I would love love love it if ESLint were written in Rust. In larger codebases, there's a noticeable delay on save and in IntelliSense. This is definitely the future. While I love JS/TS and work with it, there is really no performance comparison with Rust. It's a matter of when we have a production-ready JS/TS linter in Rust, not if it's going to happen. There was https://github.com/rslint/rslint but it looks dead already.

Also, I believe it could have a better structure to support formatters to overcome this situation: https://typescript-eslint.io/docs/linting/troubleshooting/formatting. So instead of adding prettier or dprint, we could finally have one proper tool for both linting and formatting.


Somehow I also feel like Rust could be the better option in the long run. It's just really early days but I already see that many tools are moving to Rust for performance reasons.

Besides that: ESLint, Rome, and Prettier have always had one big issue in my opinion => they try to be JS/TS first.
At work we have e.g. projects using https://github.com/HubSpot/prettier-maven-plugin
And yes, under the hood it uses prettier https://github.com/jhipster/prettier-java/blob/2e0e0da2a288068c91d8f4c6133eedc4b7ce23ca/packages/prettier-plugin-java/package.json#L14
But Prettier ships with a lot of JS-specific functionality out of the box which just goes unused in the end, as there is not a single line of JS code in our project(s).
And I strongly suspect ESLint and Rome are currently much the same.

I would love to use one single tool to lint everything, but a tool that only uses what it needs and is also freaking fast. Not 20ms per file but 20µs per file would be a huge benefit.


Another point in favor of moving to Rust is that, slowly at first, the whole parser / AST generators could be written in Rust, which would speed up the whole ecosystem in the long run.

I had a thought while reading this thread and wanted to introduce an option:

This may be one of those situations where there is an opportunity to unite a community by saying the next ESLint version will be Rome (in a sense). Basically: join Rome.

Rome may not be perfect, or may have some concepts you would not agree with, but I feel that could be overlooked in preference of the "greater good". I believe if ESLint puts its backing behind Rome, it may start a domino effect of movement.

What does that get the community?

  • Easier decision making when starting a project
  • Easier decision making on API's / config
  • Easier movement for the community (let's say JS incorporates TypeScript as a first-class citizen; a single project community can rally around that and make sure everyone migrates to the new language feature)
  • Combining of talents: Rome is already written in rust, and would mean less barrier to considering that option

Mostly the unification of the community is why I feel this may be a good idea. Interested in your thoughts!

I've often thought about this myself, but I have my issues with Rome and their maintainer(s).
I already asked multiple times (on bird-site) if they could point me toward how to provide e.g. a pug plugin, but they totally ignored me.
I also looked into their docs and it feels like they want to rule everything. Also a goal of Rome is to format, lint and even bundle code but mainly JS/TS in the first place.
So also this doesn't match with my dream-linter tool.
I just want one linting tool that can lint everything on demand and only what I tell it to lint and with my configuration. Not formatting and not bundling. I already have other tools for formatting and bundling.

Thanks for the insights. I'm very well aware of Rome, and I don't see ESLint and Rome being a good fit to work together. Rome is all-Rust, which gives it speed but also limits its extensibility. We want to continue to support the existing (large) ESLint ecosystem, which means some parts will need to still be written in JavaScript. Plus, I don't think we want to rely on a profit-seeking startup vs. having a community-driven project like ESLint. What happens if the startup fails?

On the Rust side, I think we will definitely look at rewriting parts of ESLint in Rust. We have some folks looking at that right now. Will we go all the way with Rust? I don't think so. I just don't think we can do that without instantly making all the custom rules and plugins people have made obsolete, and then it's like starting up any other random project where people will need to get things set up again.

Never say never, but that's my current thinking. There will be RFCs when we have some more definitive ideas.

Rome is VC-funded, if I understood correctly. While it sounds great to join efforts, as I read in a previous post, that means that down the road the direction and governance of the project may be influenced in ways other than what the current maintainers wish.

Rust is great and fast, but as of today it also carries even more cognitive overhead when it comes to contributing... For a project like ESLint, where contributed plugins make up a good part of its value, Rust is a suboptimal choice, I think.

@willster277

Valid preference, but would there be any chance of providing types within ESLint rather than in the separate @types package?
Currently I'm scratching my head over the flat configs as there's no type definition for them, however the options use existing type declarations. I'm trying to figure out whether to declare my own interface or to modify an existing one.

Yes, that was my intent. I'll update the original text to indicate that.


@JoshuaKGoldberg @bradzacher If you have time, I'd love to hear what we could change about ESLint to make working with TypeScript easier.


I would very much suggest unifying the parser, so that it doesn’t matter if someone uses JavaScript or TypeScript. They’ll use the same parser and the same rules.

In other words: TypeScript support should be first class, or it never gets the ecosystem buy-in to be truly frictionless, and you’ll be back at the situation of today.

As a user of TypeScript and ESLint, it would be nice not to have to add additional TypeScript-specific packages from a different project in order to use ESLint with TypeScript. Aside from requiring extra dependencies and configuration, @typescript-eslint appears to have some philosophical differences from ESLint in that its "recommended" set of lints IMO goes beyond finding issues that are highly likely to lead to errors and into more subjective territory.

Providing types in a package is a mistake; it conflates semver of the types with semver of the actual API.

@ljharb I would say the types and the API are the same thing. If you look at statically typed languages, it's usually impossible to separate the concepts, simply because of the nature of static typing.
The "API," as you call it, is less the interface and more the hidden magic, while the types are the interface: what to put in, what to expect out. Types are API.

Even if no types have changed and only internal code is different, it's a good idea to maintain parity between the version attached to the types and the version attached to the API. "Types v1.2.1" to me implies that it doesn't include any new types which may be necessary for "API ^v1.2.2".

In JS we can split types and API into utterly separate objects; I believe it's a bad idea to do that.


@willster277 bumping the major version because you renamed a type is correct, but is a very high cost to impose on normal JavaScript consumers who have to read the changelog to discover that they're not actually affected at all.


Anecdotally, I don't recall seeing a non-type-only package major-bump purely for type changes unless those type changes were themselves breaking.
It's rare that you are able to make breaking changes to your type API surface without also making breaking changes to your runtime API surface, as they're usually pretty tightly coupled.

I’ve seen it frequently - just renaming an exported argument type, for example (which has no direct JS representation), would cause this.

I still feel that's more of an issue with how the specific package is maintained, rather than a disadvantage to bundling types. As mentioned elsewhere in this thread, there are already statically typed languages that have in depth support for packages and updates.

@willster277 bumping the major version because you renamed a type is correct, but is a very high cost to impose on normal JavaScript consumers who have to read the changelog to discover that they're not actually affected at all.

This is given without much elaboration, can you elaborate on how this is a high cost? Also, is it actually a requirement to bump major due to types?

I just haven't seen this as an issue in practice, even at large companies (bigger than Airbnb even!) I've worked at down to smaller ones.

@ScottAwesome I agree. I've never found myself in a situation where I need to rename a type and make a breaking change.
Even if I had, I'd put it on the backlog until there were enough other "we really want to break this" things to warrant a breaking change.

The blindingly obvious thing to do is the exact same thing we do when we break our JavaScript.

Before:

type OldNameIDontLike = /* ... */;

After:

/**
 * @deprecated use {@link NewName} instead
 */
type OldNameIDontLike = NewName;

/**
 * @since vX.Y.Z - replaces `OldNameIDontLike`
 */
type NewName = /* ... */;

If the content of the type has a breaking change, then simply keep the old definition under @deprecated and make the breaking change to the content within NewName.

Changes to types work the exact same way, pose the exact same issues, and have the exact same mitigations as changes to functional JS, with the key difference that type changes are almost always a side effect of a change to functional JS, meaning you would be doing this regardless of your use of TypeScript.

HOOOOO BOY. There's a lot to talk about here.

I've got a version of this written up already (typescript-eslint/typescript-eslint#5845 (comment)) but I've copied it here so that I can add more context

It's worth noting that a lot of the problems we run into with type-aware linting also apply in some degree to eslint-plugin-import which does its own out-of-band parsing and caching.


ESLint is currently designed to be a stateless, single-file linter. It and the ecosystem of "API consumers" (tools that build on top of its API - IDEs, CLI tools, etc) assume this to be true and optimise based on the assumption. For most parsers (@babel/eslint-parser, vue-eslint-parser, etc) this holds true - they parse a file and forget about it, and for our parser (@typescript-eslint/parser) in non-type-aware mode this also holds true. However, when instructed to use type information, our parser breaks both assumptions - it now stores stateful, cross-file information.

Type-aware linting, unfortunately, doesn't fit too well into the ESLint model as it's currently designed - so we've had to implement a number of workarounds to make it fit - we've fit a square peg into a round hole by cutting the edges of the hole. This, as you can imagine, means there are a number of edge-cases where things can get funky.

ESLint Usecases

ESLint is used by end users in one of three ways:

  1. "One and done" lint runs - primarily done by running `eslint folder` or similar on your CLI. In this style of run each file is parsed and linted exactly once.
  2. "One and done, with fixers" lint runs - primarily done using `eslint folder --fix`. In this style of run most files are parsed and linted exactly once, except those that have fixable lint errors, which are parsed and linted up to 11 times.
  3. "Continuous" runs - primarily done via IDEs. In this style of run each file can be parsed and linted 0..n times.

For a stateless, single-file system - all 3 cases can be treated the same! In that style of system when linting File A you don't ever care if File B changes because the contents of File B have zero impact on the lint results for File A.
However for a stateful, cross-file system each case needs its own, unique handling. For performance reasons we cache the "TypeScript Program" (ts.Program) once we've created it for a specific tsconfig because it's super expensive to create - so we are storing a cache that needs to correctly react to the state of the project.
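That per-tsconfig caching can be sketched roughly as follows (names are hypothetical; the real cache lives in @typescript-eslint/typescript-estree and is considerably more involved):

```javascript
// Hypothetical sketch of the per-tsconfig Program cache described above.
// `createProgram` stands in for the expensive ts.createProgram call.
const programCache = new Map();

function getProgramFor(tsconfigPath, createProgram) {
  let program = programCache.get(tsconfigPath);
  if (!program) {
    program = createProgram(tsconfigPath); // expensive: done once per tsconfig
    programCache.set(tsconfigPath, program);
  }
  return program;
}
```

The whole difficulty discussed below is deciding when entries in a cache like this are allowed to go stale.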

Caching

These are the caching strategies that we can use for each usecase. Note that each usecase affords a different caching strategy!

  1. "One and done" runs have a fixed cache - we can assume that file contents are constant and thus that the type information is constant throughout the run.
  2. "One and done, with fixers" runs mostly have a fixed cache, except for those files that get fixed, but as fixers "should not break the build", we assume that the fixed file contents won't change the types of other files.
    • This is a slightly unsafe assumption, but the alternative is to treat this case exactly the same as the "continuous" case, which means we hugely impact performance.
    • This assumption allows us to re-check a subset of the project (just the fixed file and its dependencies) with the slower builder API, rather than switching the entire run to the slower builder API - which obviously allows us to remain fast.
  3. "Continuous" runs are the wild wild west. The cache has to be truly reactive as anything can change at any time and any change can impact any and all types in other files.
    • Note that by "anything can change at any time", I really do mean anything. Files and folders can be created, deleted, moved, renamed, changed at the whim of the user, and most of those changes occur outside of the lint run (mentioned in more detail below)

TypeScript

TypeScript's consumer API is built around the concept of "Programs". A program is essentially a set of files, their ASTs, and their types. For us a program is derived from a tsconfig (eg the user tells us the configs and we ask TS to create a program from the config).

A Program is designed to be immutable - there's no direct way to update it.
To perform updates to a Program, TS exposes another API called a "Builder Program" which allows you to inform TS of changes to files so that it can internally make the appropriate updates to the Program.
The builder Program API is much slower for all operations than the immutable Program API - so where possible we want to use the immutable API for performance reasons and only rely on the builder API when absolutely required.

So to line it up with the aforementioned usecases - we want to use the immutable API for (1) and most of (2), fall back to the builder API when a file is fixed in (2), and (3) always uses the builder API.

ESLint's API

ESLint implements one unified API for a consumer to perform a lint run on 1..n files - the ESLint class.

There are no flags or config options that control how this class must be used by consumers. This means that ESLint cannot distinguish between the above usecases. This makes sense from ESLint's POV - why would it care when it's a stateless and single-file system; all the cases are the same to it!

This poses a problem for us though because if ESLint can't distinguish the cases, then we can't distinguish the cases and so we're left with the complex problem of "how can we implement different cache strategies without being able to tell which strategy to use?"

Problems

Cache Strategy and Codepath Selection

As mentioned above, we want to use the immutable Program API where possible as it's so much faster. We do this automatically by inferring whether or not you've run ESLint from the CLI by inspecting the environment. It's a hack, but it does work for usecase (1). Unfortunately there's no way for us to differentiate usecase (1) from (2), so we have to have a fallback to switch to the builder Program for usecase (2) so that we can update the Program after a fix is applied.
If our detection code doesn't fire, we just assume we're in usecase (3), and use the slow but safe codepaths.
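A toy version of that kind of environment-sniffing heuristic (purely illustrative - the real detection logic is different and more involved) might look like:

```javascript
// Illustrative only: guess which of the three usecases we're in from the
// command line, falling back to the slow-but-safe "continuous" mode.
function inferLintRunMode(argv) {
  const looksLikeCliRun = argv.some((arg) => /eslint(\.js)?$/.test(arg));
  const isFixRun = argv.includes("--fix") || argv.includes("--fix-dry-run");
  if (looksLikeCliRun && !isFixRun) return "one-and-done";    // immutable Program API
  if (looksLikeCliRun && isFixRun) return "one-and-done-fix"; // builder fallback
  return "continuous";                                        // builder Program API
}
```

The fragility is obvious: anything that wraps or re-invokes ESLint (custom scripts, test runners) defeats the sniffing.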

Slow lint runs often occur due to incorrect usecase detection due to the user running things in ways we didn't expect / can't detect (such as custom scripts), or due to cases we haven't handled.

Disk Watchers

Ideally we'd attach filewatchers to the disk to detect when the relevant files/folders are changed (would solve the "out-of-editor file updates" problem below).
Unfortunately there's no good way to attach a watcher without creating an "open file handle". In case you don't know - open file handles are a huge problem because NodeJS will not exit whilst there are open file handles. Simply put - if we attach watchers and don't detach them then CLI lint runs will just never exit - it'll look like the process has stalled and you have to ctrl+c to quit them.

There is no lifecycle API built into ESLint so we can't tell when would be a good time to clean up watchers. And because we can't tell the difference between an IDE and a CLI run, we can't make assumptions and attach watchers either.
So ultimately we just can't use watchers! Thus our only option is to rely on the information ESLint tells us - which is just going to be information about what file is currently being linted - and hope that is enough information.

Live File Updates

This is only a problem for usecase (3). When you make a change to file A in the IDE, the IDE extension schedules a new lint run with the contents from the editor which we use to update the Program. If you have file B that depends on the types from file A, this means that we've also implicitly recalculated the type for file B.
However, the extension controls lint runs - so we cannot trigger a new lint run on file B. This means that file B will show stale type-aware lint errors until the IDE schedules a new lint run on it.

Single-threaded vs Multi-threaded linting

The implicit update of file B's types based on changes to file A assumes that both file A and B are linted in the same thread. If they aren't linted in the same thread, then updates to file A will not be reflected in file B's thread, and thus file B will never have the correctly updated types for file A - which leads to incorrect lints!! The only way to fix this would be by restarting the IDE extension (or the IDE itself!)

Out-of-editor File Updates

In any IDE you can use the "file explorer" to move files to different folders, or even rename them. This disk change happens outside of the editor window, and thus no IDE extension can or will tell ESLint that such a change occurred. This is a big problem for us because the Program state explicitly depends on the filesystem state!

We have some very slow fallback codepaths for this case that attempt to determine if out-of-editor changes occurred on disk, but they're not perfect and can miss cases.


So with all that being said... what would I want to see from a rewritten version of ESLint?

Well the biggest problem we have is that we cannot tell what state ESLint is running in, so we have to rely on fuzzy and unsound logic in order to determine cache strategies.

So I'd really want ESLint to be able to tell parsers and plugins about the state ESLint is running in so that they can make decisions about how to invalidate or update their data-stores.
I suspect this means that ESLint will need to have more than one API for consumers instead of the single ESLint API it exposes - but that's something that can be nutted out later? Hard to say.

Worth mentioning this is something I've been thinking about for a long while (eg #13525), but obviously haven't had the time to do any formal design or RFCs.

8 replies

Passing this to the parser is actually what we've been off-and-on talking about sending as an RFC to ESLint for some time! It's what I've been planning on working on once our v6 is out the door.

@nzakas absolutely that would help, as long as individual rules could register their own listeners. If the identity of context.session remained consistent, that could be a WeakMap key for anything that needs caching per-session :-)
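A sketch of what that per-session WeakMap cache could look like (both the session object and this helper are hypothetical):

```javascript
// Hypothetical per-session cache: the session object itself is the WeakMap
// key, so all cached data becomes garbage-collectable when the session ends.
const sessionCaches = new WeakMap();

function getCached(session, key, compute) {
  let cache = sessionCaches.get(session);
  if (!cache) {
    cache = new Map();
    sessionCaches.set(session, cache);
  }
  if (!cache.has(key)) cache.set(key, compute()); // expensive work runs once per session
  return cache.get(key);
}
```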

@bradzacher @JoshuaKGoldberg for clarification: you’re saying the session info is needed not just in rules and plugins, but also in the parser? Can you explain more about that?

@ljharb ah that’s interesting! I hadn’t thought about exposing these events within rules. Can you explain more about how you think that would work? My initial thought was that plugin-level hooks would be the best fit, but I definitely want to hear more.

@nzakas essentially, let's say I have 20 rules in the import plugin, but only 12 of them need the cached dependency graph. If only the other 8 are enabled, the dep graph shouldn't be gathered - but if any of those 12 are, then prior to any rule running (and before any rule's timing info starts being measured), I'd want a "pre-lint" step where I could gather the dep graph and cache it, so that all of the rules that need it would have it available.

Even better is if all the rules that don't need this cache could run first - ideally in parallel - so that rules aren't unnecessarily delayed.

The reason to use a per-session cache is long-running use cases like eslint_d, or in an editor, or jest-eslint-runner, etc.
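One hypothetical shape for such a pre-lint hook, where rules declare the shared data they need and the expensive work only runs if an enabled rule asks for it (all names invented for illustration):

```javascript
// Hypothetical API: rules list their shared-data dependencies in meta, and a
// pre-lint step builds each dependency at most once, only if actually needed.
function runPreLintHooks(plugin, enabledRuleNames, context) {
  const needed = new Set();
  for (const name of enabledRuleNames) {
    const rule = plugin.rules[name];
    for (const dep of rule?.meta?.sharedData ?? []) needed.add(dep);
  }
  for (const dep of needed) plugin.sharedData[dep](context); // e.g. build the dep graph
}
```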

for clarification: you’re saying the session info is needed not just in rules and plugins, but also in the parser? Can you explain more about that?

Yes - type information is generated in the parser.
The workflow is essentially:

  1. eslint asks @typescript-eslint/parser to parse
  2. @typescript-eslint/parser uses @typescript-eslint/typescript-estree under the hood
  3. @typescript-eslint/typescript-estree inspects the parserOptions to determine whether or not type-aware linting was requested (specifically looking for parserOptions.project to be set)
    • if set, @typescript-eslint/typescript-estree coordinates TS to create a ts.Program - the data structure which includes all information about a tsconfig.json and from which type information can be interrogated. The program is returned as part of the parser services.
    • if not set, @typescript-eslint/typescript-estree just uses TS to parse the singular file and nothing more.
  4. ESLint coordinates the lint rules
  5. If a rule wants to use type information it accesses the ts.Program from the parser services, and consumes them (details not important for this discussion)

Our requirement isn't at the plugin level - lint rules just use the generated data structure.
Our requirement is at the parser level where we do all of the work up-front to create the backing data structures to power type-aware linting.
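The workflow above can be simulated in miniature (all names are stand-ins; the real logic lives in @typescript-eslint/typescript-estree):

```javascript
// Minimal simulation of the workflow: the parser decides whether to do the
// expensive type-aware setup based on parserOptions.project, and exposes the
// result through parser services for rules to consume later.
function parseForESLint(code, parserOptions = {}) {
  const ast = { type: "Program", body: [] }; // stand-in for the real ESTree AST
  const services = parserOptions.project
    ? { program: { kind: "ts.Program", project: parserOptions.project } }
    : {};
  return { ast, services };
}
```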

Language agnostic

If this is intended to be used for any language (either specifically in the web ecosystem or more broadly), why not rewrite in a more performant language, e.g. Rust?

6 replies

I would love to see markdown support. It would be a good complement to Prettier. See the rules in https://github.com/DavidAnson/markdownlint.

eslint-plugin-markdown

This seems to lint JavaScript inside markdown. I want to lint markdown.

It's possible to support any language with ESLint right now (eg eslint-plugin-relay or eslint-plugin-graphql lint graphql fragments and .graphql files), there are just certain constraints the parsers need to work within to produce an AST that ESLint won't crash on.

@oliviertassinari there are things like markdown-eslint-parser and eslint-plugin-md, but as you can see from the download counts, in general people will just get behind tools that are built for purpose (like markdownlint or remark-lint).

To clarify: a lot of these other tools copy ESLint mechanisms to bootstrap their own linting. The idea here is to create a core that is general purpose both to make it less difficult for things like graphql-eslint to plug in and also to allow folks to more easily create standalone tools like markdownlint that currently need to reimplement all the boring stuff the ESLint core has.

I recommend converting this issue to a discussion. We need structured comments.

1 reply

Done!

@JoshuaKGoldberg a major downside of writing any JS tool in "not javascript" is the dramatically decreased pool of potential contributors.

8 replies

But with Rust's ability to call out to JavaScript (as mentioned in the OP), wouldn't there still be the option to do the same thing in JavaScript?

It will still be "not JS" because of the extra JSDoc syntax that you'll be introducing (which is worse than TypeScript, having tried this out).

So better to introduce a new language that has a better syntax, more features and a larger developer community (TypeScript) than one that is worse on all of those metrics (JSDoc)

@JoshuaKGoldberg

So just confirming - although ESLint will not be JS-specific, the idea is that it still is a JS-first tool? E.g. you might be able to use it for non-JS code such as Markdown or even any arbitrary language, but the focus will still be on JS/TS & adjacent web languages?

Sort of. The idea is that ESLint will be targeted primarily at the web development ecosystem. The ESLint team will still maintain JS-specific functionality and possibly some others (JSON seems like an obvious one?). JS still seems like the best language for plugins and custom rules, so likely that the core will remain written in JS. But we really want to encourage others to create plugins for other related languages like CSS, Markdown, TOML, YAML...anything that is typically found in web development.

I'm under the impression that if someone wanted to write a plugin in JS and have it be called by Rust, they could.

Yes, this is possible, however, you pay a penalty every time you cross over the Rust-JS boundary. If you think about how rules work, where you create an object that then visits nodes in the AST, it's not a very clear line between Rust and JS. Let's say the AST lives in Rust, that means for every visitor function we call, we'd end up serializing the AST node from Rust to pass into JS. That's some non-trivial overhead. So, that likely means the AST needs to stay in JS to avoid paying that cost. And if the AST stays in JS, there are likely other things that need to stay in JS to avoid crossing the boundary multiple times. So, short answer: this is more complex than it seems. If ESLint plugins just provided data without functionality, we would have more options, but as it stands, a complete Rust rewrite seems not to be the best approach. That said, using Rust and embedding Deno allows us to start with most of the core in JS and then to slowly try to replace pieces with Rust to see what will work and what the performance implications are.

And as @ljharb mentioned, a complete switch to Rust would limit contributions. Having spent the past month learning Rust, I can tell you it is very intimidating and often very frustrating. Given that we are targeting the webdev ecosystem, JavaScript is a much softer landing point for contributions.

Yes, this is possible, however, you pay a penalty every time you cross over the Rust-JS boundary.

So, that likely means the AST needs to stay in JS to avoid paying that cost.

I agree with these statements, but it makes me wonder: if the parts that handle the AST (by that I understand parsing and linting) stay in JS, what other pieces could be written in Rust without paying this boundary-crossing cost?

I've seen many projects talk about having parts written in Rust or other languages, either as binaries or WASM, that were disappointed by the performance wins from the native parts, as they were offset by the serialization/deserialization cost when crossing the boundaries.

Here, for example, are the discussions that happened for SWC:

On another note, Parcel made an interesting use of SharedArrayBuffers to share maps across threads. Since SharedArrayBuffers are also available in Rust, I wonder if that could help with the serialization/deserialization issues (disclaimer: I don't have any significant experience myself with WASM or Rust):
parcel-bundler/parcel#6922

It’s not about avoiding the boundary cost, but minimizing it. For instance, a lot of the CLI bootstrapping can easily be done in Rust and be a lot faster than in JS: searching the file system, reading files, etc. Then we can just pass those strings into JS to work with.

There are a lot of possibilities and a lot of existing approaches to review. Right now we are just capturing ideas, so I can’t be more specific than that.

Make ESLint type-aware ... TypeScript

One potential-maybe problem is linking the concept of type awareness to TypeScript specifically. I worry that the community is starting to enable typescript-eslint's APIs in shared packages so much that we're making extracting ourselves from TypeScript difficult. TypeScript has issues in its control-flow analysis that are only likely to be solved by a native-speed equivalent. We're starting to see early-stage TypeScript competitors pop up, such as Ezno and stc.

One path ESLint could go down is:

  1. Adjust the core structure to allow for type-aware linting via a plugin (i.e. @bradzacher's comments here: Complete rewrite of ESLint #16557 (comment))
  2. Create a standardized API around type comparisons that plugins can plug into (i.e. Proposal: Type Relationship API microsoft/TypeScript#9879, but generalized)
6 replies

honestly - the type annotations proposal is... weird. It's just a proposal for a sort-of-structured place in JS code where type annotations can go, but there's no spec for even basic validation (like, say, track that types reference types, or names reference things that exist), let alone type validation.
So ESLint would have to design, spec and build an entire system around that complexity.

It's also a long, long way off (if it ever actually lands).

I think that making the type-information portion pluggable is a great idea because it means ESLint is agnostic of the type-system, but provides a consistent API that future parsers could build within to allow plugins to opaquely consume type information.
Again though it would require very careful design to ensure that it's providing a truly system-agnostic API.

Yes, my intent was to make type-awareness as generic as possible and not specifically tied to TypeScript. Allowing it to be pluggable would make a lot of sense.

My feeling is that there will likely be some tools developed with non-TypeScript type awareness and it would be good to be able to hook that in. I could see a situation where the core JS plugin has a default type-aware functionality (using JSDoc, most likely) that could be swapped out.

Amusingly, JSDoc types already exist in TypeScript. There might be a lot of time saved by going with TypeScript as a default provider for that system - both in developing the system, and for users needing to set it up.

Yup, that was my intent. But to your other point, figuring out a way to still make that pluggable would be important.

I've been doing pure JS with JSDoc for years now, and pretty much every project I do has both eslint-plugin-jsdoc and typescript-eslint. But I will say that JSDoc syntax isn't the same as what TypeScript reads. I have to disable jsdoc/valid-types because of things not in JSDoc, like recursion, TypeScript-specific types (Partial<T>), and extends. Not having to use @typescript-eslint/parser would be nice.

Sounds great!! One thing I think could be improved (not necessarily related to the reimplementation) is the mess of config files in the root of the project. Maybe ESLint could start a new standard of a .config folder or similar.

1 reply

Once the new config system is in place, we are going to stick with it. You can always specify an alternate config file location using -c on the command line if you want to move your config file elsewhere.

I'd like to emphasize the desire for designing the rewrite around improved performance.

The faster feedback loop that tools like ESLint offer is already valuable, but it is still fairly performance-bound, especially in project-level rather than file-level use cases. ESLint's currently slow feedback loop means that file-level lints tend to be preferred (via editor plugins or git hooks); however, overall performance could get worse if we add more project-level information to ESLint. Rust and type support could definitely improve this, but I'd like ESLint to be redesigned at its core to encourage performance patterns (like 11ty/Vite/Vitest/etc.).

Additionally, focusing on performance can potentially reduce fragmentation when competing with other tools. For example, the popular formatter Prettier is well supported but fairly slow, and Rome provides a faster Rust-based alternative to ESLint. By providing a fast and extensible platform for AST analysis, we could possibly make faster alternatives to tools like Prettier while including/maintaining extension support.

1 reply

Yes, performance is a main driver of this rewrite.

As a user (encouraged to share my point of view as one), I am very happy with ESLint! ✨ If it can become faster, and if TypeS(cript) could become first-class citizens, I'll take it. Other than that I really have no desires :) Thank you for making all of our code better, Nicholas 🙏

PS as a QoL suggestion just for yourselves, perhaps you could reconsider making ESLint in TypeScript/Rust after all; it will be more expressive and enjoyable to write than JavaScript/Rust with JSDoc types sprinkled on top, IMHO. sucrase-node will keep your dev iterations blazingly fast, free of heavy build steps. You save those build steps for CI and npm releases, so you can still ship vanilla JS (plus types for free) to consuming devs.

2 replies

Already mentioned above, but TypeScript is off the table. ESLint needs to work on vanilla JS out of the box, and to do a good job, we need to dogfood ESLint on itself. We'll leave TypeScript-specific functionality to the typescript-eslint folks.

@nzakas obviously you get the ultimate decision on this but for the tool to remain popular and relevant you need to listen to the community. You're almost universally being told TS is the way to do this correctly, please don't dismiss it because it doesn't conform to your original plan or some preconceived idea.

I can't think of any major project that I'm aware of that has switched to TypeScript and regretted it or switched back. Even Jest ditched Flow for TypeScript typings because TypeScript is just that much better for community contributions.

Project lead for JSON Schema here. I know you use the JSON Schema implementation ajv. When you come to decide whether you still want to use JSON Schema, and which implementation you might use, I'd invite you to have a discussion with the JSON Schema team/community.
Newer versions of ajv have some issues which we won't go into here.
Newer versions of JSON Schema support far easier extensibility.
IMHO, it's worth investigating whether that extensibility may make it easier for plugin authors to define the additional config options they want to have, AND provide auto-complete / IntelliSense / further information about specific keys and values in a config file.

I'm keeping it short so it's more likely to get read, and I realise it's probably a SMALL element of concern, but given the consideration is a rewrite, now is the time to say something and offer our assistance. =]

1 reply

Thanks, we can consider this. The big problem we have is compatibility -- if we don't want to force everyone to rewrite their existing rules to use a new schema format (which we don't), we have limited options. This is also why we never upgraded ajv once the next major version came out -- it had too many breaking changes that would have caused a headache for the ecosystem.

Amazing to hear this proposal! A lot of great new things here 🙌

However, another vote for reconsideration of rewriting in pure TypeScript (not JS + JSDoc), because of things that were already mentioned:

  1. Simpler syntax: TypeScript annotations are much simpler in a lot of cases
  2. Features: some things are not possible alone with JS + JSDoc (for these features, some users decide to mix in some extra TypeScript - at that point, you have 3 languages!)
  3. Community: JS + JSDoc has a much smaller community and public documentation (blog posts, official docs, etc). This makes it much more challenging to find information about how to do things.
  4. The "more contributors with JS" argument: you're introducing a new language anyway (JSDoc), and contributors will not be allowed to break types. So either they learn a more complicated syntax with much smaller community, or they learn TypeScript, which is better on those metrics. Also, "more contributors with JS" is becoming less of an argument over time, with TS community size making consistent gains on JS community size.

Saying this after having implemented a medium-size project in JS + JSDoc and regretting it every moment.

4 replies

As already mentioned in several threads above, TypeScript is off the table. I'm aware of the tradeoffs but we need to dogfood ESLint in vanilla JS.

Just to throw my own two cents, I think the dogfooding argument should actually go towards using TS: while ESLint works very well for raw JS, the way it works for TS isn't as smooth. I would imagine if ESLint was written in TS (and thus was dogfooding it), the integration would be much better (at no cost for JS, I'd also imagine, with JS being essentially a subset of TS).

And I'd also add that on top of dogfooding TS, you would get two parsers for the price of one by using the TS parser. As an added bonus, the TypeScript parser's reading of JSDoc could help add rules that understand types (which I understand to be a goal of this rewrite).

@nzakas I already suggested (a bit above somewhere) that ESLint should not support first-class JS linting, but treat JS as a plugin like every other language out there.
I have all my projects in TypeScript, and we even use Prettier for some Java-only projects. In both cases we don't need the JS-specific overhead ESLint comes with.

Instead of dogfooding the ESLint core with its own source, the eslint-plugin-javascript should get dogfooded with JavaScript files in its test suite!

One other thing that I've been thinking about is the idea of parallel linting and cross file information.

For a purely single file system it doesn't hugely matter how you bucket files together across threads because each file is independent.

However for a system with cross-file information you need to be context-aware in how you bucket files - you want to keep as many related files together as possible so that you don't waste time duplicating work across threads.

For typescript-eslint it's super simple to do this bucketing - you do it at the project level. Each tsconfig represents a unit of code that needs to stay together, or else you need to duplicate the ts.Program in multiple threads (which is the most memory- and time-intensive piece of the type-aware parse!).

So having some notion of "parser-informed bucketing" would be really good for the future state of eslint.

It's worth noting that such a system would also benefit single-threaded linting as well! Why? Well right now typescript-eslint has to treat eslint's parsing as "random access", even in single-run mode. We don't know what order ESLint will ask us to parse files in AND we don't know what files are being linted AND we don't know if a file will be linted multiple times.

This is an issue because it means there's no point at which we can drop a project from memory because we don't know if we'll need it again - so we just accumulate memory as we setup more projects over time. For large workspaces this means we will eventually cause node to OOM.

If we could hint to ESLint that we'd prefer the files be linted in a certain order, each project could be linted to completion before starting on the next. In conjunction with the above-mentioned "session context", we'd be able to know exactly when the current lint run is "done" with a project. This in turn means we could keep exactly one project in memory at a time - ensuring we're not going to cause an OOM!


Worth mentioning that I have played around with the idea of parallel linting bucketing by project and it works really well, and I saw some not insignificant perf wins from it!

This PR includes a proof-of-concept CLI for typescript-eslint which sits in front of eslint to split work across threads where each thread is essentially just eslint <...files from project>.
typescript-eslint/typescript-eslint#4359

2 replies

So having some notion of "parser-informed bucketing" would be really good for the future state of eslint.

Do you have an idea of what this might look like?

If we could hint to eslint that we'd prefer if the files are linted in a certain order such that each project is linted to completion before starting on the next project.

There’s an interesting tension here — ESLint currently works on files, not projects. If ESLint gets a list of files to lint, what would happen after that? Pass that to you for you to further dig into the file system? And what about the browser playground? Without a file system, what would change?

Without a file system you're just working in a single-file world. It's entirely possible to mock out the FS and have everything still work as intended. E.g. our playground (https://typescript-eslint.io/play/) is fully set up for type-aware linting. You could easily use a completely virtual filesystem as well (which is how a lot of browser-based IDE environments work with TS).

the TS team even has a package to make this easy to do (https://www.npmjs.com/package/@typescript/vfs)
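The core idea can be illustrated with a tiny in-memory filesystem (method names here are illustrative, loosely modeled on TS host interfaces - this is not the @typescript/vfs API): "files" are just entries in a Map, so type-aware tooling can run in a browser with no real disk.

```javascript
// Minimal sketch of a virtual filesystem backed by a Map. A parser or
// language service host could read/write through this interface instead
// of Node's `fs`, making the whole pipeline usable in a browser.
function createVirtualFs(initialFiles = {}) {
  const store = new Map(Object.entries(initialFiles));
  return {
    readFile: (path) => store.get(path),
    writeFile: (path, text) => void store.set(path, text),
    fileExists: (path) => store.has(path),
    // Naive prefix match; a real implementation would handle separators,
    // globs, and directory semantics properly.
    readDirectory: (dir) => [...store.keys()].filter((p) => p.startsWith(dir)),
  };
}
```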

In terms of what I'd expect - ESLint works up front to determine the entire set of files to be linted, then it sends them to the parser for "bucketing". The parser does whatever work it needs to do to determine the best buckets for the set of files.
For TS it would be a matter of us getting the list of tsconfigs from the parser options, interrogating the configs to find out which files they contain, and then bucketing the files around those lists. In a nutshell, we'd want to ensure that each tsconfig is handled by just one thread to avoid duplication and waste.
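A rough sketch of that bucketing step (`bucketByTsconfig` is a hypothetical helper; the input is assumed to be each tsconfig's already-resolved file list): assign every file to exactly one config, first match wins, so no file's project is built twice across threads.

```javascript
// Hypothetical sketch: given a Map of tsconfig path -> resolved file list,
// assign each file to exactly one tsconfig (first match wins) and return
// the de-duplicated buckets.
function bucketByTsconfig(tsconfigFiles) {
  // file -> owning tsconfig; a file listed by several configs is claimed
  // by whichever config appears first.
  const owner = new Map();
  for (const [tsconfig, files] of tsconfigFiles) {
    for (const file of files) {
      if (!owner.has(file)) owner.set(file, tsconfig);
    }
  }

  // Invert into tsconfig -> files buckets, ready to hand to one thread each.
  const buckets = new Map();
  for (const [file, tsconfig] of owner) {
    if (!buckets.has(tsconfig)) buckets.set(tsconfig, []);
    buckets.get(tsconfig).push(file);
  }
  return buckets;
}
```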

Hi. swc author here.
Actually, swc has swc_ecma_lints with lots of rules, but as the swc core team is quite small, we decided we can't maintain it, and I'm fine with handing it off. So it may help.
Feel free to contact me (even by email or twitter/discord dm) if you are interested.


Thanks! I’m afraid we also don’t have the bandwidth to maintain a separate project.

I meant handing it off to the eslint team completely

This comment has been minimized.


Not great to "minimize" comments to hide their upvotes. If longevity is a concern then I wouldn't discard people's comments suggesting TypeScript.


ESM with type checking. I don't want to rewrite in TypeScript, because I believe the core of ESLint should be vanilla JS, but I do think rewriting from scratch allows us to write in ESM and also use tsc with JSDoc comments to type check the project. This includes publishing type definitions in the packages.

Is there any better reasoning for this other than "I think it should be written in vanilla JS"? I mean, TS compiles to JavaScript - you know this. Using JSDoc comments to do type checking is just a more roundabout and buggier way of using TypeScript. What's the point?


Well, JS + JSDoc is probably still better than no types at all 🤷
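For reference, the approach under discussion looks like this: plain `.js` source annotated with JSDoc types, which `tsc` can type-check (e.g. with the `allowJs`, `checkJs`, and `noEmit` compiler options) and from which it can emit `.d.ts` files. The `formatRule` function below is a made-up example, not ESLint code.

```javascript
// Plain JavaScript, type-checked by tsc via JSDoc annotations - no
// TypeScript syntax in the source file itself.

/**
 * @param {string} ruleId - The rule's identifier, e.g. "semi".
 * @param {{ severity: 0 | 1 | 2 }} options - Off / warn / error.
 * @returns {string} A short human-readable summary.
 */
function formatRule(ruleId, options) {
  return `${ruleId}: ${options.severity}`;
}
```

Running `tsc --allowJs --checkJs --noEmit` over such files reports type errors, and adding `--declaration --emitDeclarationOnly` produces the type definitions that could be published with the packages.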


As an ESLint user, my main pain point is ESLint's CPU and memory usage. I work with a monorepo that has a lot of different packages, each of which has its own eslintrc (root: true). This is needed because packages are written in either JS or TS, use special files (vue/pug), and are destined for browser/node/chrome extension/outlook addin/universal targets - so all these projects require different configurations. When running ESLint in this monorepo, there are two options: starting one ESLint process from the root, or starting one ESLint instance in every package folder. The first approach is the fastest, but we needed to increase Node's memory limit, as typescript-eslint resources do not seem to be freed once an ESLint root has been processed. The other approach is very slow, as initialization seems quite slow.
This initialization issue is also problematic when running ESLint as a git pre-commit hook. The time git takes to lint on commit is directly proportional to the number of files (because we need to pipe the staged files into ESLint one at a time, starting as many new ESLint processes as there are committed files to lint).


So what I'm trying to say here is that it would be nice if the new version of ESLint allowed:

  • freeing resources that are no longer needed because all related files have been processed (e.g. a ts.Program)
  • some way to cache/persist/share session data across multiple processes


Labels: core (Relates to ESLint's core APIs and features), needs bikeshedding (Minor details about this change need to be discussed), breaking (This change is backwards-incompatible), tsc agenda (This issue will be discussed by ESLint's TSC at the next meeting), needs design (Important details about this change need to be discussed)
31 participants
Converted from issue

This discussion was converted from issue #16482 on November 16, 2022 18:54.

