March 11th, 2025 · #typescript #performance #compiler
TypeScript Just Got 10x Faster
The TypeScript team has ported the TypeScript compiler and tools to native code, realizing about a 10x performance improvement across parsing, type checking, and emitting.
- Announcing a 10x faster TypeScript compiler by porting to native code
- New native TS compiler can compile the full VS Code codebase in 5-6 seconds vs over a minute previously
- New TS compiler will allow instantly invoking TSC from terminal for 10x faster type checking
- Ported TypeScript compiler to Go language for speed, data structure control, garbage collection and great concurrency support
Transcript
Wes Bos
Welcome to Syntax. We got a really exciting one for you today. A major announcement that I never thought would necessarily drop this year, or maybe even ever. I jumped on a call last week with Daniel, and he says, hey, can't tell anyone, but this is what we're working on. And I haven't been this excited in many years. So I'm super stoked. Today, we have Anders Hejlsberg on. He is the creator of TypeScript, lead architect of TypeScript, amongst many other things. It's on Wikipedia, you can look it up. And Daniel Rosenwasser, who is the principal product manager of TypeScript. So welcome so much. We'll get right to the point. What's this big announcement?
Announcing a 10x faster TypeScript compiler by porting to native code
Guest 1
Who wants to let the cat out of the bag?
Guest 2
Sure. Well, the big announcement is that we are porting the TypeScript compiler and toolset to native code, and we are realizing about a 10x performance improvement from that effort. That is the really short message.
Guest 2
So we're 10x-ing the speed of the TypeScript compiler and toolset.
Wes Bos
Man, that's awesome. And, to be clear, that is both parsing and type checking of code bases?
Guest 2
This is lock, stock, and barrel, the whole thing. Yes. Scanning, parsing, binding, checking, emitting.
Guest 2
The whole pipeline will be native code. Yes. Wow.
Guest 2
And it's a port, not a start-from-first-principles effort, which means we are literally aiming to have a plug-and-play replacement for our existing toolset that produces precisely the same error messages and behaves, in general, exactly the same, except 10 times faster.
Wes Bos
So one day, I'll be able to just type TSC into my terminal, and it'll be 10 times faster?
Guest 2
That is exactly it. In fact, with the stuff that we open sourced now, that is what you can do. Yes. Wow. Now, of course, it's not done. But, you know, take a marquee project that everyone's familiar with: VS Code. That's, you know, forty-seven hundred files, one and a half, one point six million lines of code.
Guest 2
And we compile everything, from parse, bind, check to emit, the whole thing, in about five and a half, six seconds.
New native TS compiler can compile the full VS Code codebase in 5-6 seconds vs over a minute previously
Guest 1
And that used to take more than a minute. And to build on what Anders just said, yeah, you'll be able to type in tsc and get, like, a blazing fast experience. I know everyone loves that term. Like, pnpm: extremely fast.
Guest 1
Blazing fast. Blazing fast. Oh my gosh. You're wearing the shirt. I love it. Okay. Maybe I guess we should get one of those too. Yeah. Sentry dot shop. You can get it. It has the Syntax label right on it. Yeah. Love it. Oh, that's so funny.
Guest 1
Did you wear that for the show today?
Wes Bos
I did. I did. I woke up, and I was like, today's a big day. What t-shirt should I wear? I was like, "blazing fast" is the tee that we gotta wear. Yeah. Okay. I love that. So yes, it's not just if you run tsc
Guest 1
in your terminal, right? And it's not just, you know, in your CI. It's also because TypeScript has always been the thing that powers your language experience in your favorite editor, like VS Code. Right? When you open up a project and you hover over something or you go to definition or you request completions, the same codebase that powers the compiler also powers that experience. Right? So that means if you start up and you load up a project, we get right to it, and we're able to load up enormous projects in a fraction of the time of what it took in the old compiler.
New TS compiler will allow instantly invoking TSC from terminal for 10x faster type checking
Scott Tolinski
That's amazing. And so VS Code naturally will just inherently feel faster all around. Right? Yes. And type checking your entire project in one fell swoop is gonna be just endlessly easier. Right? Is that the whole goal here?
Guest 2
Well, I mean, once you're 10x faster, you know, you could potentially not have sub-projects or whatever and just make one big mono-project out of it if you want. But I think the thing that I'm excited about is just that a lot of people have that compile-and-type-check phase in their inner-loop dev cycle. Right? And that's where all of a sudden, instead of having to sit and wait for, like, twenty seconds or thirty or whatever the size of your project is, it's basically instantaneous.
Guest 2
Amazing. Yeah.
Guest 2
Yeah. I think it also, you know, opens up opportunities for rethinking what we do with the language service, because it is now so fast. Yeah. Especially when it comes to things like AI, you know, like providing more contextual information for LLMs on the fly, semantically checking LLM output as it's being produced by the AI. Right? So, yeah, things that just weren't meaningful before all of a sudden become possible.
Wes Bos
Man, I didn't even think of that. Like, immediately, my mind went to, I can now type check my entire project, maybe even on key down, right?
Wes Bos
You know? Oh, yeah. Right. Or on save.
Wes Bos
Inference is, like... there's always this kinda dance between, like, super powerful inference and generating types. So now I assume this means we'll be able to do a lot more crazy inference.
Guest 2
Please, no.
Guest 2
Okay.
Guest 1
This has been one of those sort of funny conversations that we've always had, regardless of the rewrite. Like, are you all familiar with this concept of induced demand, where, you know, as you build out more roads, you just end up with more drivers, and so traffic doesn't actually get solved? Yeah. Totally.
Guest 1
So it's sort of like, you know, we've always been wary of, well, we're not trying to give more rope to people.
Guest 1
But we do want things that are, like, more reasonable within the type system to be a lot faster. Right? And so, you know, I think we've had conversations around all these sort of limiters and things like that, because, you know, our type system is expressive, and it's Turing complete.
Guest 1
And, yeah, you know, we've had to put in limiters to make sure that, like you just said, when you do a keystroke in your editor, you wanna get errors really quickly. Well, if you're running a program to run a program to figure out the errors...
Guest 2
Yes. I mean, you've seen Doom running in the type checker right there. But, oh my god, there are examples like that, like SQL parsers and all sorts of clever ways of using template literals and picking them apart with conditional types and inference and so forth.
Guest 2
And, of course, that generates a lot of work. And the thing that we are a little bit afraid of is that people will just stop optimizing sooner.
Guest 2
Mhmm. Because, oh, it only takes ten seconds. Right? And so now instead, you're just gonna generate 10x more types and stop at ten seconds, as opposed to actually getting it down to where it should be. Right? This is the problem of having a Turing-complete type system. There is basically no guarantee that anything will terminate.
Guest 2
That is just an artifact of Turing completeness. Right? And so... That's interesting.
Guest 2
So these type constructs we have are incredibly powerful, but with great power comes great responsibility. And, you know, I'm just hoping that the community will continue to be responsible.
Wes Bos
Well, as someone who has hit the limit of template literal types, I may abuse it a little bit.
Wes Bos
How are you using it, if you don't mind us asking? No, I was just curious what the limit was. So I tried to make a type of phone number where every number ever could be a union.
Guest 2
But, obviously, that's... Well, with union types, I believe we stop after 10,000.
Guest 2
And we have not upped those limits, and we're not planning to. Okay. Yeah. You know? So you'll hit the limiter quicker, but it'll be the same limiter.
Scott Tolinski
Okay. Okay. That's good. So how long has this been in development? And has it been difficult to not mention it until now?
Guest 2
Yes. It's been difficult for sure. Of course, this conversation about native code has been ongoing for a long time, because everyone knows, you know, that JavaScript comes with some overhead when you run it. And I can talk more about the specific challenges that we face there, but we've also seen efforts in the community to attempt to port TypeScript, multiple efforts
Guest 2
that haven't really succeeded. And largely, I think it's because they never set out to port. They set out to build a compatible native TypeScript compiler. And we knew from the get-go that that is not what we wanna do. Because if we're ever gonna do this native thing and move the whole team to, you know, building in native code, as opposed to maintaining two codebases in perpetuity, we want something that can be a plug-and-play replacement for our existing compiler. And the only way you get that is if you port. Because there are just myriad behaviors where you could go one way or the other way. Like, take the type inference we were just talking about: there are often many correct results, and yet you may depend on which one gets picked.
Guest 2
And if we were to pick another one, then, problem.
Guest 2
And so we really want the same behavior. And only by porting the code do you get the same semantics and the same behavior.
Wes Bos
So when you say you ported it, does it mean you literally just took all the different JavaScript functions and rewrote them in another language? And I promise we will get to what language it is. Yes. Yeah. Kinda. Kinda. Yes. Okay.
Guest 2
And so we started prototyping.
Guest 2
I started prototyping in August of last year, just because we were curious. Okay, it's time for us to get some data, because there's all these rumors: oh, it's gonna be, you know, the same speed; it's gonna be 2x, 5x, 10x, 100x, you know, infinite-x faster. People are making all these claims about how much faster, and we felt, okay, let's spend a little time, try to get a baseline for what is possible here.
Guest 2
And it was pretty encouraging.
Guest 2
I mean, the porting effort was not too hard. And once we had a scanner and a parser, you know, we kinda knew that, okay, we could project from this. And it just held up, you know, because those early prototypes were about 10 times faster, and we're still 10 times faster. Now there are challenges, mind you, with getting the 10x, and we should talk about that, because each phase of compilation has different challenges in getting to 10x. Because the 10x doesn't just come from native code. Half of it comes from use of shared-memory concurrency, which is a thing that we do not have available to ourselves in JavaScript.
Guest 2
So half of our performance gain is from being native code and the other half is from being able to utilize all those CPUs that we all get.
Guest 2
You can't buy a single-core CPU anymore. You can only buy things with at least eight cores in them, right? Unless you're buying a microcontroller or something. But like desktop CPUs. Absolutely.
Guest 2
Even your phone.
Guest 2
I mean, they have oodles of cores.
Guest 2
And so the game is, if you can't utilize all of those cores, you're just leaving a ton of money on the table. And JavaScript basically forced us to leave that money on the table. And there are good reasons why JavaScript doesn't support shared-memory concurrency.
Guest 2
It comes with a whole slew of challenges.
Guest 2
You know, everything has to be thread safe, and you have to have race detectors, and you have to have all sorts of things that are way beyond what's meaningful to foist on
Guest 1
someone writing code in the browser, which is the natural affinity of JavaScript. Right? It's not just that. The style of code that JavaScript users would have to write would have to be defensive and, you know, think about these concerns.
Guest 1
The runtimes themselves have always been interested, and, you know, they're working on these sorts of things. They do want to strive to make this accessible for JavaScript users.
Guest 1
But, you know, there are a lot of challenges. Like, hey, your strings internally go through different sorts of representations, and now those have to be thread safe in so-and-so ways, because they're made in a way that's good for efficient concatenation over and over and over again. Right? And if you make that same string accessible by multiple threads, then, like, now do you have to lock on that thing? You know, that's not necessarily gonna be fast all the time. So, yeah, the runtimes are working on it, but at the same time, it will take some time before JavaScript is there, I think. Okay. Yeah. The TypeScript toolset is very much an outlier in suitability
Guest 2
for running on the JavaScript platform. Right? I mean, JavaScript is affinitized with UI and the browser, but we have an excellent execution environment in Node.js. Right? But JavaScript was never engineered to be the runtime platform for compute-intensive server tools, right? And that's effectively what we are. We do an awful lot of compute, and we have no UI whatsoever, really.
Guest 2
Yeah. We're just this thing, you know, either a command-line compiler that writes to the console, or a language service that doesn't write at all but handles JSON messages in a highly concurrent manner. Right? So we very much don't fit the profile. Right? And so, yeah, it's not surprising, you know, that this environment is not optimized for this kind of workload.
Wes Bos
Yeah. So, for as long as it's existed, TypeScript is not the best language to write TypeScript in, but TypeScript is a fantastic language to write most of the other stuff, the applications in this world. Well, by building TypeScript in TypeScript,
Guest 1
right, we have gotten such a good pulse on it. Yep. And we've been able to make it that great toolset. Right? And JavaScript is, like, pretty fast on the whole, right, for the sorts of things it needs to model. Right? I mean, it's a highly dynamic language in many ways. Yeah. I would add, I'm actually pretty proud of the compiler we built in it, you know. Like, everyone would tell you you're absolutely
Guest 2
nuts, bonkers, crazy for building a compiler in JavaScript. Right? I mean, that's what we went and did, and it became one of the most widely used compilers in the world. And honestly, its performance is not bad. It's not bad at all if you compare it to other compilers. It does pretty darn well. But we've always known that we could potentially make it 10 times faster. Right? So now we're sort of unleashing the beast finally. Right? Because, yeah, we do have an incredibly fast compiler. I mean, the ability to compile and fully type check one and a half million lines of code in five seconds, that's fast. I mean, 4,700 files. Right? It's really fast. How many megabytes of source code is that? Is that 50 megs of source code? VS Code, I think, or yeah. I mean, it's a lot.
Scott Tolinski
Unbelievable. Yeah. Yeah. So I guess now might be a good time to ask: if not JavaScript, then
Guest 2
what is it, and why was that choice made? We ultimately chose to go with Go, the programming language called Go. And we can get into the reasons for that, of course. We did indeed experiment with many other languages.
Ported TypeScript compiler to Go language for speed, data structure control, garbage collection and great concurrency support
Guest 2
Everyone, of course, asked about Rust. Why not Rust? And why not C#, since, you know, I have a long history with C#? And why not my favorite language, X, Y, or Z? If you look at our wish list, when you consider that we're gonna port: our wish list includes things like excellent native executable support on all major platforms, and native first. I mean, not like some ahead-of-time compile of something that first runs as bytecode or whatever. We want something that is optimized for native on all major platforms, first of all. We wanted a language that could give us great support for layout of data structures. In particular, we want the ability to have structs, which is something that JavaScript doesn't do. In JavaScript, everything is an allocation if you're dealing with an object. There is no way of, like, inlining stuff, which means if you have an array of a hundred thingies, you have a hundred allocations, as opposed to one allocation if you inline them all. Right? Yeah. So we wanted that level of control.
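To make the struct point concrete, here's a minimal Go sketch (illustrative types only, not the real compiler's): a slice of struct values is one contiguous allocation, while a slice of pointers, which is roughly what an array of objects costs in JavaScript, pays one allocation per element.

```go
package main

import "fmt"

// Thingy is an illustrative payload, not a real compiler type.
type Thingy struct {
	Kind int
	Pos  int
	End  int
}

func main() {
	// One allocation: 100 Thingy values laid out contiguously in memory.
	inline := make([]Thingy, 100)

	// Roughly 101 allocations: the slice plus one heap object per element,
	// which is about what an array of 100 objects costs in JavaScript.
	boxed := make([]*Thingy, 100)
	for i := range boxed {
		boxed[i] = &Thingy{Kind: i}
	}

	fmt.Println(len(inline), len(boxed))
}
```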
Guest 2
We knew that we're porting, and that means garbage collection, automatic garbage collection. We have a codebase that assumes the existence of a garbage collector. And garbage collection has never been a gating factor for us. It's never been a problem.
Guest 2
But it definitely absolves you from implementing not just a compiler, but also a memory manager, and all of the issues that come along with ownership and, blah, blah, blah, dangling pointers or what have you, right? We didn't want any of that.
Guest 2
And we were perfectly happy being garbage collected. So GC was a must.
Guest 2
And then finally, we knew that we were leaving all this money on the table on multi core systems, which now is every system in the universe. And so we wanted great support for concurrency.
Guest 2
And then finally, I would say we wanted a language that's simple and easy to approach, and something that has a decent toolset, because our team is gonna live in this toolset.
Guest 2
And ultimately, with all of those in mind, Go won out. In particular, a reason why we didn't choose Rust is that Rust does not have a GC.
Guest 2
You're basically dealing with manual memory management. Now, it has the borrow checker that saves you from a whole bunch of problems you might run into, like cyclic data structures and whatever. But the thing that's supposed to save you from this problem actually, if you have cyclic data structures, causes a big problem for you, because now you have to unwind that. You have to come up with new data structures that aren't cyclic, or where you cheat with the cycles. And our ASTs, I mean, we navigate our abstract syntax trees, right? They are cyclic data structures with parent pointers, and every node has a bunch of child pointers, and you can navigate up and down the tree. And we do that everywhere. Our language service does it everywhere. Our compiler does it everywhere to type check. So we're running around over these trees all over the place. Right? And having to drop all of that means that now you're buying yourself a whole bunch of problems that no one bargained for. And so when you sum all of that up, Go was just the natural place for us to go.
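For context, a tiny Go sketch of the shape Anders is describing, with an invented ASTNode type: parent pointers plus child pointers form cycles, which a garbage collector handles for free, and which Rust's ownership model would force you to redesign around (reference counting, arenas, or index-based links).

```go
package main

import "fmt"

// ASTNode is an illustrative stand-in for the compiler's node type.
type ASTNode struct {
	Kind     string
	Parent   *ASTNode   // up-pointer: this is what makes the structure cyclic
	Children []*ASTNode // down-pointers
}

func addChild(parent *ASTNode, kind string) *ASTNode {
	child := &ASTNode{Kind: kind, Parent: parent}
	parent.Children = append(parent.Children, child)
	return child
}

func main() {
	root := &ASTNode{Kind: "SourceFile"}
	id := addChild(root, "Identifier")
	// Navigate up and down freely; the GC reclaims the cycle when unreachable.
	fmt.Println(id.Parent.Kind, root.Children[0].Kind)
}
```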
Guest 2
And I will say also, you know, an interesting artifact of the TypeScript codebase is that it was, from the get-go, written in a very functional style. It's sort of functional-style JavaScript as opposed to class-style JavaScript.
Guest 2
We have very few classes in there. The entire type checker is one function called createTypeChecker that contains, like, 50,000 lines of code, with a whole bunch of smaller functions inside. And the "global state," in quotes, of the type checker are the locals of the createTypeChecker function. And it then returns an object with a bunch of callback functions on it, you know, so you can call back into the closure, etcetera, etcetera.
Guest 2
Now, that style of coding is precisely how Go works also.
Guest 2
Go does not have classes. It's all functions and data structures. And so, in that sense, it was also a very natural fit.
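Here's a hedged sketch of how that closure style maps onto Go, with invented names standing in for createTypeChecker: a constructor closes over its locals (the "global state," in quotes) and returns a struct of function values, the analogue of the object of callbacks.

```go
package main

import "fmt"

// Checker exposes only the operations returned from newChecker; the state
// behind them stays sealed inside the closure. Names here are illustrative.
type Checker struct {
	CheckExpr func(expr string) string
	ErrCount  func() int
}

func newChecker() Checker {
	// These locals play the role of the checker's "global state".
	errors := 0
	cache := map[string]string{}

	return Checker{
		CheckExpr: func(expr string) string {
			if t, ok := cache[expr]; ok {
				return t
			}
			errors++ // pretend every new expression reports one error
			cache[expr] = "unknown"
			return cache[expr]
		},
		ErrCount: func() int { return errors },
	}
}

func main() {
	c := newChecker()
	c.CheckExpr("x + y")
	fmt.Println(c.ErrCount()) // 1
}
```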
Wes Bos
You just said 50,000 lines. So I, like, went to GitHub and opened up checker.ts to see if you were exaggerating. And it's not like, oh yeah, once you put together all the pieces and all the imports, it's 50,000. No. Literally, the file is 53,041
Guest 2
lines long. Sure. And I'll tell you why. It's because that's the only way that you can get closed-over state in JavaScript. Basically, there are two ways of having state in JavaScript. You can either have it in objects.
Guest 2
With objects, you can never in JavaScript guarantee privacy. Well, with later versions of JavaScript, you can get pretty close with symbols and private fields, but that's a class thing only. But ever since day one, you've been able to create closures in JavaScript where you close over local state, and that local state is completely sealed off. No one can access that state other than callbacks that you return out of your function. Right? But you can't have local functions that aren't in the same file as you.
Guest 2
And therefore, they all have to be in the same file.
Guest 2
Or else we would have broken checker.ts into 10 files a long time ago.
Guest 1
The closures are also, like, much more optimized in engines, just because they can be. Right? Like, the engines know exactly how to find where a variable lives in a closure, and so it's a lot faster than, like, storing it on objects. And with the Go version, we do take advantage of, like, the ability to have state that doesn't need to be closed over as much.
Guest 1
But it's a simple transform. Right? It's not that different at the end of the day. Right? Yeah. If you go back in history, originally,
Guest 2
the very first TypeScript compiler that we released was actually written in a more class-like style. And we found that to be not at all efficient on JavaScript, because what happens with classes is, whenever you call a method on a class in JavaScript, it's always an indirection. All methods effectively are virtual in JavaScript. Right? Because anyone can walk in and assign a new value to this property, and that's what your methods are. Now, that in turn means that inlining is basically off the table, because the VM doesn't know where this call is gonna go, because, like, a couple of instructions later, it might go somewhere else, because someone assigned a new function pointer here.
Guest 2
But with local functions, the VM does know where the call goes. There's no polymorphism there. And that actually allows the VM to do inlining, plus it makes the calls faster and the state access quicker. And so functional-style JavaScript actually ends up being more efficient, interestingly.
Guest 1
Mhmm. I wanna actually use this as a way to just, like, point out how much we've looked into all of these different things, how much TypeScript has been optimized to run on modern JavaScript engines. Right? Like, we're not full experts on engine internals, but we have dove into a lot of engine internals. We have people on the team who have written, like, VS Code extensions to analyze the sorts of things that V8 is doing under the hood, so that we can try to find the optimization points, places where, like, inline caches are not gonna be as efficient. That's how much is going into this stuff. Right? So, on one hand, you have all of our mental energy going into that. And the other side of this would be, like, you know, we were talking about different language choices. Right? So if you pick something like Rust that doesn't even have a garbage collector runtime, well, that kinda ties into that question of port versus rewrite. Right? Because as soon as you start to have to make some of the same calls around, okay, I wanna represent all my data structures the same way as in TypeScript, all my algorithms the same way as in TypeScript, well, you really can't, because you have to rethink every single one of the data structures and how they hold on to memory, and the lifetimes. And that's, like, the key thing: you do have to think about the lifetime. That's a feature of that language.
Guest 1
But if you're trying to port into another language and you have to rethink, oh, well, technically this thing could hold on to that thing and the other, now you're not really porting. You're rethinking every single part of your application and the lifetimes.
Guest 1
And the key was, if we were gonna do this... because for years, we were being asked, when are you gonna rewrite this thing in Rust? From all sorts of users who were hitting, like, the scaling issues and the memory issues, out-of-memory issues.
Guest 1
When are you gonna do that? So if you're gonna say, alright, we're gonna rewrite it, that's gonna take a long time.
Guest 1
Right? Whereas when Anders went off and, like, started the first prototype, he was able to port a huge chunk of the parser.
Guest 2
I don't even remember how fast you were able to do it from start to end. I think that the scanner and the parser, I had it done in, like, a month or a month and a half.
Guest 2
Wow. And then at that point, we were getting numbers that have borne themselves out. And like I said, it might be interesting here to talk a little bit about how that is. And so, when porting to Go, you asked earlier, did you literally port function by function? So, yeah. In the beginning, I was manually porting function by function. Very quickly, we wrote a tool that would take our TypeScript codebase and just turn all of the TypeScript syntax into Go syntax, meaning that it's syntactically correct. So we would literally take the TypeScript AST, and then we had an emitter that would emit out Go.
Guest 2
Now it's syntactically correct, but it's not gonna compile, because these functions don't exist and blah, blah, blah.
Guest 2
No, it just changed the constructs in JavaScript into valid Go constructs. Operators are slightly different, and there's no ternary operator, and blah, blah, blah. That sort of thing, right? Now, that meant that we basically had this codebase in Go that had, like, tens of thousands of errors, but we could copy and paste a function at a time and then fix it up.
Guest 2
And that was really sort of how the port progressed.
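As a small example of the kind of mechanical fix-up being described (the snippet is hypothetical): Go has no ternary operator, so a converted expression has to become an if/else.

```go
package main

import "fmt"

func main() {
	isError := true

	// TypeScript: const label = isError ? "error" : "ok";
	// Go has no ternary operator, so the port rewrites it as an if/else:
	label := "ok"
	if isError {
		label = "error"
	}

	fmt.Println(label) // "error"
}
```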
Guest 2
Now, the one thing that we couldn't copy and paste were all the data structures and the data type definitions, because the object model in Go is quite different from JavaScript's object model. Okay, well, to be honest, what I should say instead is JavaScript's object model is quite different from any other native-code language's object model. Right? Because it has crazy things like expando properties. You just go up to an object and jam a property onto it. Oh, yeah, that's fine. You know, you can even compute the property name, right, with an expression.
Guest 2
You could do all these nutty things. Everything converts to everything, you know. And then the TypeScript type system has union types, right, and these fancy constructs that you don't really find in other programming languages.
Guest 2
So we had to do some serious reengineering of our data structures.
Guest 2
But, you know, we did eventually find the right way to do that, and it was actually a journey in and of itself. Because in Go, you sort of start out thinking, oh, well, I'm gonna have an AST.
Guest 2
It has about 300 different kinds of nodes. So probably I should have an interface that represents a node, and then I should have interfaces for each of the different kinds, and whatever. And we sort of started down that path.
Guest 2
That turned out not to be the right way of doing it. And there are a bunch of reasons why, you know, and some of them are sort of reflective of the way Go treats interfaces. But we ended up with a more hybrid model, where we maintain the same kind of data structures that we have in the TypeScript compiler, where every AST node has a kind property that tells you what kind it is. And that's fine. And in JavaScript, that's all you need, because once you know the kind, then I can safely access these properties, and they'll be there and they'll have values.
Guest 2
And JavaScript will just give you undefined if they're not there. But in Go, you can't do that. You can't just cast something, because it's a type-safe language. It is not gonna allow you to cast it. I mean, you either have this property or you don't have it. And the only way to get polymorphism is by having interfaces. So we actually have a hybrid model, where our AST nodes, and also type checker nodes, embed an interface pointer in the node that points to itself but gives you the ability to treat the node polymorphically.
Guest 1
It's really trippy.
Guest 2
It is, but it works remarkably well, because now you can implement casts where, first, you check the kind of the node and go, oh, this says I'm an identifier. In that case, I can safely call the .AsIdentifier method on the node, which casts that embedded interface to *Identifier.
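A minimal sketch of that hybrid model, with invented names rather than the real compiler's API: every concrete node embeds a common Node header carrying a kind tag plus an interface pointer back to itself, so a kind check followed by a type assertion plays the role of the safe cast.

```go
package main

import "fmt"

type Kind int

const (
	KindIdentifier Kind = iota
	KindStringLiteral
)

// Node is the common header embedded in every concrete AST node.
type Node struct {
	Kind Kind
	data nodeData // interface pointer back to the concrete node
}

type nodeData interface{ node() *Node }

type Identifier struct {
	Node
	Text string
}

func (id *Identifier) node() *Node { return &id.Node }

func NewIdentifier(text string) *Node {
	id := &Identifier{Text: text}
	id.Kind = KindIdentifier
	id.data = id
	return &id.Node
}

// AsIdentifier is the kind-checked "cast": a type assertion on the
// embedded interface, safe once you've checked n.Kind.
func (n *Node) AsIdentifier() *Identifier { return n.data.(*Identifier) }

func main() {
	n := NewIdentifier("foo")
	if n.Kind == KindIdentifier {
		fmt.Println(n.AsIdentifier().Text) // "foo"
	}
}
```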
Guest 2
And it works. And so now we have a way of porting our code, and it doesn't look completely different. We don't have to go re-engineer everything, right? And that then allowed us to attain a certain velocity in getting there. But what I was gonna get at with the port was that we get, like, 3x from being in native code, simply because we don't have JIT overhead, and we have a more efficient object model, and we're compiling straight to native, and so forth. Right? But the other 3x, 3 to 4x, we get from concurrency.
Guest 2
And it turns out that certain problems in the compiler are what I call embarrassingly parallel.
Guest 2
Things like parsing source files and binding source files.
Guest 2
All that parsing a source file consists of is loading a file into memory as a string and then building an AST, i.e., building a cyclic data structure that allows you to navigate that source file.
Guest 2
That requires no communication with other source files; there are no cross-references. So it's just this local, compute-intensive problem that you can run on one core. And if you have 16 cores, you can go 16 times faster by using each of the cores.
Guest 2
That's what we do. So the parser is embarrassingly parallel. To parse VS Code, we literally start off 4,700 goroutines, and we give each of them a source file.
Guest 2
And then the scheduler just schedules, and off they go to the races. And if you have 16 cores, you go 16 times faster. Man. And then we just wait for all of them to finish, and now you're done.
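A minimal sketch of that fan-out, with a hypothetical parseSourceFile standing in for the real parser: one goroutine per file, a WaitGroup to wait for them all to finish, and every resulting AST landing in the same shared address space.

```go
package main

import (
	"os"
	"sync"
)

// SourceFile and parseSourceFile are illustrative stand-ins for the
// compiler's real types; the shape of the fan-out is the point here.
type SourceFile struct {
	Name string
	Text string
}

func parseSourceFile(name, text string) *SourceFile {
	return &SourceFile{Name: name, Text: text} // imagine AST building here
}

func parseAll(fileNames []string) []*SourceFile {
	results := make([]*SourceFile, len(fileNames))
	var wg sync.WaitGroup
	for i, name := range fileNames {
		wg.Add(1)
		go func(i int, name string) { // one goroutine per source file
			defer wg.Done()
			text, err := os.ReadFile(name)
			if err != nil {
				return // a real compiler would report this
			}
			results[i] = parseSourceFile(name, string(text))
		}(i, name)
	}
	wg.Wait() // wait for all of them; the ASTs share one memory space
	return results
}

func main() {
	_ = parseAll([]string{"a.ts", "b.ts"})
}
```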
Wes Bos
So someone who has 32 cores on their laptop will see that step? Yes. That parsing step is now 32 times faster? It would be 32.
Guest 2
Well, no. It would be a hundred times faster, because it's three times faster because of native, and 32 times faster. Oh, yeah. Yeah. You're right. Because, like,
Wes Bos
if you could run the JavaScript on all the cores, it would be 16 times faster. Right? Yeah.
Guest 2
Now, the reason this works, though, if you think about it, is that when all of these parsers, all these goroutines, are done, well, now you have a bunch of ASTs sitting in memory.
Guest 2
But because you have shared memory concurrency, they're all sitting in the same memory, and everyone can get at it. Now the only way in JavaScript to get concurrency is by having web workers.
Guest 2
But web workers are by design completely isolated, and you can't communicate between them other than by passing messages, as in JSON, or having shared array buffers. But those are not objects. Those are just blittable bits. Right? And so we could start off 32 web workers and parse 32 times faster, and then we would have 32 random collections of source files sitting in separate, isolated memory spaces. And they can't talk to each other. And the only way to get them into the same memory space is to marshal it, which means turn it into JSON, which takes longer than it took to parse the source code in the first place. So now you've given it all up. And so that was the problem that we always ran our heads into the wall on with JavaScript. Right? Now, that problem simply does not exist in Go, because it has shared-memory concurrency, as do most modern native programming languages. But as I said earlier, that comes with a whole bunch of... like, your entire runtime has to be thread safe for that even to be possible.
Guest 2
The entire JavaScript runtime is manifestly not thread safe, or was never engineered to be. It is, in fact, a feature of JavaScript that there's only one thread. Right? So that was just a thing where, oh my god, we'll just never be able to take advantage of this latent
Scott Tolinski
power, you know, that everyone has. Right? And if you want to see all of the errors in your application, you'll want to check out Sentry at sentry.io/syntax.
Scott Tolinski
You don't want a production application out there that, well, you have no visibility into in case something is blowing up, and you might not even know it. So head on over to sentry.io/syntax.
Scott Tolinski
Again, we've been using this tool for a long time, and it totally rules. Alright.
Scott Tolinski
So when you set out to do this port, did you have the general idea of where you'd find all of the savings from the start? I'm sure you're familiar with the codebase in general.
Scott Tolinski
Did you say, alright, let's get the port working first, and then I know exactly what I'm going to tackle? Or was it a lot more exploration?
Guest 2
There was a bit more exploration. I will say one thing: we know our compiler very well. And the compiler, like I said, is engineered to be sort of a very functional piece of code. Right? Which is wonderful, because that means that you can do things like these embarrassingly parallel problems to build the data structures, and then you can stop modifying the data structures and treat them as immutable, and then share them amongst multiple other threads in your next phase of compilation.
Guest 2
Right? And that's what we do. First, parsing, you know, builds up the ASTs, and then binding installs symbol tables in the ASTs. And from there on out, we don't modify them. And that means we can now have multiple other things party on those ASTs at the same time, without having to incur locks and synchronization or mutexes or what have you. Now, we knew upfront that all of these embarrassingly parallel things, like parsing and binding and emitting, could be scaled linearly on the number of CPUs you have available on your machine. However, those are also the things that only take about a third.
Guest 2
In aggregate, they represent maybe a third of the compile time of a typical program. The two thirds that remain is the check phase, the type checking.
Guest 2
And type checking is not so kind as to be embarrassingly parallel.
Guest 2
Because if you think about it, type checking, especially in JavaScript, a language that has global script files, right, means one file over here can reference another file over there, or modules can import each other. So everything can reference everything, effectively. And that means you've got to build up the entire program in a data structure.
Guest 2
And then, as you start resolving types, you know, let's say, const x: Foo. Well, now you've got to figure out where Foo is. Oh, Foo was imported from over here, and now you gotta go resolve that stuff over there. So everything jumps around everywhere, basically.
Guest 2
Right? And that's a lot harder to parallelize.
Guest 2
And we've been scratching our heads about that for a long time, like, what are we gonna do? But we had this working theory that the interesting thing about a type checker is: what output does it produce, ultimately, like in a batch compile? So let me back up again. We don't really have a speed problem when it comes to providing language service facilities, like hover or statement completion or whatever. Those execute in so little time because the compiler is so lazy that it only resolves a little bit of stuff and then delivers the answer. So we have no problem responding in less than a hundred milliseconds, generally speaking.
Guest 2
But when it comes to give me all the errors in this program, well, now we gotta go visit everything.
Guest 2
Right? And this issue of visiting everything, well, in our current compiler, proceeds entirely serially.
Guest 2
One file at a time, visit the entire AST, resolve all the types, check that they are assignable to all these other types or whatever, all the various checks that need to be done. Right? But we had this working theory that the interesting thing about the type checker is what output it produces in a batch compile. Well, ideally, nothing.
Guest 2
But worst case, it produces a list of error messages, right? That's really its output.
Guest 2
Now, we already have a compiler that is lazy by nature and doesn't need to type check the entire program to produce the errors for a single file. In fact, we have the ability to just type check one file.
Guest 2
Now, as we're type checking this file, that means we're gonna visit the AST, walk down all the nodes in that file, and check everything. And, of course, they're gonna reference stuff in other files. So we're gonna go do some work resolving types over in this other file and that other file, but most of it is local. So we spin up four type checkers, and we give each of them a quarter of the files. Well, we give them the whole program, but then we tell each of them to check a quarter of the files.
Guest 2
And off they go to the races.
Guest 2
Each of them might resolve the same type. So some types will be resolved four times, but most of the types are local.
Guest 2
And then we can just come in afterwards and harvest the error messages from each of the type checkers and eliminate the duplicates, and there you are. That's because it's shared memory?
Wes Bos
That is, again, because it's shared memory. Yes. Yeah. Right. Oh, yeah. So it just has the big...
Guest 2
uh-huh. So that's the cheat, if you will.
Guest 2
Yeah.
Guest 2
And it's kind of beautiful, because it means that we don't have to go in and make the entire type checker thread safe, because there's no contention at all. Each of these type checkers is completely isolated.
Guest 2
The only thing they share are the immutable ASTs, but they build up their own state when it comes to the types. And that means they're running full throttle in parallel, resolving types, sometimes resolving the same type, but they don't know that.
Guest 2
But they run fast, and they scale. Right? Now, the drawback here, of course, is that we end up consuming a bit more memory, because we have all of these situations where, like, all of the built-in types in the standard library, we end up resolving n times if you have n checkers, right? Because everyone refers to Array, so that gets resolved four times, etcetera, etcetera. But like I said, by and large, things are local. And so what we see is, when we run with four checkers, which is what's currently hardwired into the codebase, we consume about 20% more memory, but we go two to three times faster.
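A hedged Go sketch of that scheme, again with invented types: each checker shares the immutable program, checks its own slice of the files with private type state (so no contention), and duplicate diagnostics are removed at the end.

```go
package main

import (
	"fmt"
	"sync"
)

// Program, Diagnostic, and checkFile are illustrative stand-ins,
// not the real compiler's API.
type Program struct{ Files []string }
type Diagnostic struct{ File, Message string }

func checkFile(p *Program, file string, typeCache map[string]bool) []Diagnostic {
	typeCache["Array"] = true // built-ins may be re-resolved per checker
	return []Diagnostic{{File: file, Message: "ok"}}
}

func checkProgram(p *Program, numCheckers int) []Diagnostic {
	perChecker := make([][]Diagnostic, numCheckers)
	var wg sync.WaitGroup
	for i := 0; i < numCheckers; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			cache := map[string]bool{} // private type state: no locks needed
			for j := i; j < len(p.Files); j += numCheckers {
				perChecker[i] = append(perChecker[i], checkFile(p, p.Files[j], cache)...)
			}
		}(i)
	}
	wg.Wait()

	// Harvest the diagnostics from each checker and eliminate duplicates.
	seen := map[Diagnostic]bool{}
	var out []Diagnostic
	for _, ds := range perChecker {
		for _, d := range ds {
			if !seen[d] {
				seen[d] = true
				out = append(out, d)
			}
		}
	}
	return out
}

func main() {
	p := &Program{Files: []string{"a.ts", "b.ts", "c.ts", "d.ts"}}
	fmt.Println(len(checkProgram(p, 4)))
}
```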
Guest 1
And, also, how much memory are we saving? Right? We're actually using less memory now, because we're able to more efficiently allocate data structures and things like that. And when you are duplicating some of that memory through, you know, multiple checkers and things like that, it's maybe not as much of a savings, but you're still able to go way faster, and you're still consuming generally less memory than, like, the JavaScript version of TypeScript today.
Guest 1
So it's an all-up win, really. And if you've got more cores, you can throw more at it as well.
Wes Bos
And are there parts of TypeScript that are specifically slow? I know we had Ryan Dahl on. He's the creator of Node.js, and he was on talking about Deno.
Wes Bos
They introduced, like, a no-slow-types lint rule.
Wes Bos
Are there specifically
Guest 1
parts of TypeScript that are slow, and now are they better? Yeah. No, that's a really good question. And it kinda ties to a lot of what we were specifically trying to work on in the JavaScript codebase before a lot of this. Right? So that no-slow-types rule that I think you mentioned, if I'm remembering right, that is sort of a way for library authors to say that when they wanna publish their declaration files, the sort of things that describe to the checker, hey, yeah, there's a JavaScript file here, but here are the types for it, you don't have to read all the JavaScript. Right? So those .d.ts files, they can be generated a lot more efficiently and faster if you just explicitly write what the return types are, what the types of your variables are, things like that. And that's because that can all be syntactically calculated. Right? And we can do less work trying to generate the best name for a type and all this other stuff in the compiler.
Guest 1
So we were trying to work on things like that in the JavaScript codebase too. We have something called isolated declarations. That's, like, a very heavy version of that. There is also a newer mode called noCheck, which doesn't skip checking entirely, but checks less if you're asking for declaration files and things like that. So, basically, it tries to involve the type checker as little as possible to do certain work, like emit.
Guest 1
But I think the thing that kinda segues into is that the type checker is the most computationally involved part of our entire system. Right? I mean, there are definitely these issues where, like, you ask someone why something's slow. Right? For a JavaScript codebase at scale, they're gonna say, well, when I open up my editor, it takes a few seconds. Usually, what they mean is not that the editor takes a few seconds to open, but, like, you might see a little spinner that says, like, loading, blah blah blah, and its dependencies. And that's from the TypeScript language service.
Guest 1
That's from us trying to read through your file system, parse, and do a whole bunch of stuff before we can actually serve a request.
Guest 1
And that comes partially because, you know, parsing is literally just, like, allocating constantly, going and chomping up your memory.
Guest 1
And then the garbage collector says, wow, you've used a lot of memory. Like, do you wanna give it back? And we say, no. Right? Absolutely not. We're not done yet. Right? So a lot of that has been sort of solved by being able to, like, throw goroutines at that process, being a little bit faster, having this background GC thread that, really, doesn't feel like it's adding any time to the runtime of your program. But I would say, for a lot of the other things, like, you know, you ask for completions, you ask for auto-imports. And for all the auto-imports, it needs to figure out, like, what's the best way to display that? That can be pretty heavily a lot of computation on the type checker, just to try to figure out the type of each of those things so that I can print out a nice name for some of them, which sounds nuts. But that's where those sorts of things often come up. Or you might have, like, one of those examples where you've got someone trying to parse out SQL, like Anders said.
Guest 1
Right. Like, who knows how much work needs to be done there to actually check that, you know, this thing is compatible with this, and then you end up just saying, like, yes.
Guest 1
And it seems kinda mundane. Right? You don't see all that work. It doesn't tell you, like, you know, I spent thirty milliseconds just on this expression, because it only says, yeah, sure, everything worked the way it was supposed to, because you wrote the code, and it's been working the whole time, you know, since you started.
Guest 2
We do have a lot of, you could call them advanced, you could call them esoteric, you could call them whatever, type constructs or type constructors in the language, constructs that I honestly have not seen in type theory before. Like, for example, indexed access types, which are completely generic, where you can have, like, a T[K].
Guest 2
So on this type, I'm getting the type of this property.
Guest 2
But, of course, T and K could both be union types. So really, you're sort of fanning out in two dimensions, you know? And very quickly, those can be computationally incredibly expensive if not used correctly. Right? And it can be hard to know when they're expensive, I will grant you that. There are times where I wish we had a type debugger, you know, that would tell you, oh, here, you're looping way too many times, or this is too expensive. So that's something down the line we may wanna look at. I think also, one thing to keep in mind is that part of the power of TypeScript's type system is that it's structural.
Guest 2
But structural type systems are also the most expensive kind that you can implement. Because when you wanna know if type A is assignable to, or equivalent to, or a subtype of, say, type B, well, in a nominal language, you check: does B derive from A? Yes or no? That's a pretty quick check. And if it doesn't, well, then they're not compatible.
Guest 2
Well, in a structural language, that doesn't matter. It's all about the structure.
Guest 2
So you gotta go check all of the properties. And, of course, those properties might themselves be of type A or B or whatever. And so it all becomes cyclic and recursive and explodes in your face pretty much all the time.
Wes Bos
That's why we have those nested errors?
Guest 2
Well, that's part of it. Partially why, yes.
Guest 2
But it takes a lot of clever algorithms to tame that beast. Right? And we have a lot of that in there. But, again, when you then combine it with conditional types that are Turing complete, because they can recursively evaluate themselves, then, you know, it can get tricky.
Guest 2
Yeah. Yeah. It's super powerful, but it's also super dangerous, right, if you get too esoteric.
Wes Bos
I wanna ask you about some of the larger TypeScript codebases you've seen, because I'm always curious about what's out there. And I'm sure you've both seen them at Microsoft, as well as heard from people that have them. So VS Code is obviously a large codebase. Would you say it's 50 megs? There's bigger out there? Yeah. Yeah. There's definitely
Guest 1
bigger. Right? We've talked with companies internal to Microsoft and external. Right? So, like, we know that teams at Netflix and Airbnb and, I mean, if you have a big website or a big web app, it's almost definitely written in TypeScript these days. Right? And many of them are not, you know, what they call greenfield versions of a codebase. Right? They often start as JavaScript and then move to TypeScript because the scale was so large already. So I know Airbnb has, well, I think this is, like, yeah.
Guest 1
They've spoken about it in public at conferences and whatnot, but they have a very, very large codebase that, you know, previously, for a while, just had, like, a lot of issues just opening up and things like that. And they've had to adopt project references and whatnot.
Guest 1
I know that we have certain chat apps at Microsoft, and email clients, and data analytics platforms for, you know, slicing and dicing data and creating dashboards, things like that, that are all on the very large scale. Right? So, like, Outlook and Teams and Power BI and then some. Right? So many web projects and, like, Electron apps that we've developed are all in TypeScript.
Guest 1
And they are dramatically bigger than the VS Code codebase. Right? To the point where some builds... right? Like, I think we said in some cases, like, to build all of VS Code would take, like, a minute. Well, many of these are, like, hundreds of projects wide, or even thousands of projects large. Right? And so, especially if you're doing a lot of the same work, you know, scaling projects is a little complicated.
Guest 1
That can take, like, over thirty minutes in some cases. Right? So we are working with some of those teams to try to figure out, hey, how do you not spend thirty minutes on a build, even on the JavaScript version of TypeScript? Because we do think that there's gonna be a long period where the JavaScript codebase is gonna continue being the primary thing that people use. Right? Okay. And the native port of TypeScript, which we sometimes call Project Corsa, probably we'll be calling it, like, the TypeScript 7.0 lineage.
Guest 1
And then the current JavaScript codebase, the current TypeScript codebase, will continue in sort of the 6.0 lineage.
Guest 1
And many people will rely on that for the API and whatnot. So we still need these teams to scale and reduce their build times and try to figure out how to dig in. And we have a lot of diagnostics tooling there.
Guest 1
But, like, if you think about that, right, if you can get thirty minutes down to three minutes, you know, for a very, very big project, like, it's not your favorite iteration loop, but it's significant. Right? It's a huge win for CI. Right? Yeah. Yeah. We had
Wes Bos
Zach. He's working on Rspack.
Wes Bos
He works at ByteDance, and he said, like, their thing took, like, thirty minutes to build or something like that. And he said they got it down to several minutes. And he says, like, millions and millions of dollars of just, like, developer salary, and, like, CI. Their CI bills went down quite a bit. It's significant when it gets that large.
Guest 1
It's hard. Right? I mean, I think this goes beyond TypeScript as a specific build tool; it's all about how you build, you know? I think if you have a team and you have a person who is very invested in putting together a good setup, like, having a good infrastructure person who loves to make everybody else on your team happy, it goes a long way. Right? And, you know, to some extent, TypeScript itself is trying to do that for a lot of teams. Right? We're trying to say, hey.
Guest 1
We're gonna tell you when things go wrong before they go wrong. We're trying to give you a good editor experience. And then the next step of that is, like, well, what are we doing to give you a big chunk of that win out of the box, without you having to do that? So that's part of the native effort. Right? Like, you go from thirty minutes to three minutes. You go from thirty seconds to three seconds. Right? That is so huge.
Guest 1
And, you know, some of the stuff that we're doing right now is, like, a naive port, and we're gonna get the real stuff with more of the optimizations as well.
Guest 1
So there's more money on the table in some cases. And, I mean, there's still more for us to do.
Guest 1
We're so excited by how excited everyone else is when they hear about this. Yeah.
Wes Bos
I bet.
Guest 1
It leaked internally to some teams, and they were immediately, like, give it to us. Yeah. Right. Yeah. Yeah. We want the compiler. We're all
Scott Tolinski
in. On the "give it to us," like, when do you foresee, timeline-wise,
Guest 2
this replacing tsc as we know it? It depends on which part of tsc. If you're just talking about the command-line compiler, we're pretty close.
Guest 2
And we know it, for example, I mean, our Go compiler now compiles VS Code with no errors, and it checks everything.
Guest 2
I think, also, we've scraped the top hundred repos.
Guest 2
We have to filter them on whether they use JSDoc or JSX, because that part of the port is not complete yet. Okay. So we've scraped the top hundred that don't, and we're burning down all of those. None of them crash.
Guest 2
So they all compile, but the sets of errors they produce might not be exactly right. And so we're stamping that out. We're very close to being able to run all of our tests.
Guest 2
We have 50,000 tests that we've gradually been... I mean, we're running 20,000 of them now and producing what we call symbol baselines. But we are still working on our type printer, and that means we can't do type baselines exactly identically to how we did it before. But we can certainly get error baselines going and then start burning down that work.
Guest 2
But we're close. We're really close. I'd say we're, like, 80 to 90% done. And once we get there, then there will be a command-line compiler that you could use. But then there's the language service, of course, and then there's the entire tooling ecosystem that builds around our tools, where we have to, in some cases, come up with a whole new story for how that's gonna work. I mean, our existing API, if you will, and I'm almost putting it in quotes, because the API is effectively, hey, it's JavaScript, you can call anything you want. Yeah. Right? Yeah. Yeah. It's like, we had a function, you wanted the function, where's the function? Right? You exported it. And when you can call anything you want, people call anything you want. Right? And that means our entire implementation is effectively exposed. Right? And that's not gonna be the case anymore. Now we're gonna be in a separate process, as a native exe that, you know, you can only talk to through whatever we expose. Right? And so what do we expose, and how do we go about exposing that? Because it's also not in the same process as you; it's in a separate process.
Guest 2
And you've got to marshal stuff. And so we've got to think about batching instead of high-frequency API invocations and so forth.
Guest 2
Plus, we are taking this opportunity to re-engineer our language service to use LSP as its native protocol, the Language Server Protocol that pretty much every language service but TypeScript uses, because TypeScript actually predates LSP and was the inspiration for LSP, but we've never done the work to actually adopt it. That we're doing now as well, which makes us more embeddable in a number of situations. But that means there's some new work that we have to do there. So it's a long work stream, and it'll come in phases. Right? But, like, the first waypoint is that command-line compiler replacement, and that's pretty close. If we're thinking about, like, concretely, actual timelines, we'd like to be able to have something that is
Guest 1
If we're thinking concretely about actual timelines: we'd like to have something at rough parity with the command line compiler, without build mode, meaning no project references, by sometime around the end of spring this year. That's aspirational; we're going to shoot for it. We can't guarantee things, but we'd sure like to aim for that. And then build mode, and a language server with a lot of the core functionality, if not most of it, we'd like to try to get out toward the end of this year as well. This is what we write in every one of our plans: we aspire to do these things, and we really want to aim for them. But all the indicators are pretty good on this so far.
Wes Bos
So maybe we could have it in VS Code by Christmas, best case?
Guest 2
Maybe. Possibly. We're not going to make any promises. And like I said, with the command line compiler, it's very clear what it needs to do: it just needs to function exactly the same way as the old command line compiler. Produce the same errors, do it the same way, have the same semantics, and so forth.
Guest 2
The new language service and the new API, we don't view exactly the same way. First of all, we have this native gap that we have to bridge, being a different executable and so forth. But there's also, in a sense, a historical gap.
Guest 2
We are now in the AI era.
Guest 2
And a bunch of things we do in the language service you would do differently today if you had AI available from the get-go. A bunch of refactorings have to dream up identifier names, for example, and AI is maybe more suitable there. So the base-level functionality will be the same, but for the higher-level refactorings and quick fixes and what have you, we're going to look at having them be more AI-driven, because that is just the right thing to do at this point.
Wes Bos
Yeah. Like, something as simple as removing unused imports or something like that, is that maybe one that would be better for AI?
Guest 2
I guess that one's easy to analyze. We always have to involve the compiler in semantically checking that what happens is correct. But I'm thinking more of any refactoring that has to introduce new identifier names: you obviously want AI to look at the code and go, what do you think is a good name for this identifier? What do you use elsewhere? Let me use that. And so forth. There are things like that that we're looking at. You know?
Wes Bos
Does this open us up to any features in TypeScript itself that you possibly couldn't do before? I know about the LSP and whatnot, but is there anything else where TypeScript will change because of this new compiler? It's such a mature language that if you were to ask me, hey, what are we missing? Generics on FormData, maybe. Nothing totally crazy.
Guest 1
Well, there are a lot of analyses where we've often said, well, that's expensive.
Guest 1
And, you know, I don't know, Anders, you could probably answer this too. But like you say, it's a mature language now. We're more than a decade old.
Guest 2
And we have millions and millions of users, and we're not just going to rock the boat and do funny things just because it's interesting. We're obviously going to track the work that happens in the ECMAScript standardization committee.
Guest 2
And whatever features get introduced there, we're going to make sure that we can type check them. But even that work stream isn't moving at the same clip as it used to back with ES6 and whatever, in the old days when lots of stuff was happening. But, like I keep coming back to, there's this new AI frontier that I think is super interesting. What can you do with ultra-fast type checkers in the AI era? How can we start to think even deeper about providing more semantic products? The way I sometimes talk about it is: what is it that's going to keep AI honest when you're building agents in this agentic era we're entering? Someone has to be able to check the semantics, because only then do you know that it's verifiably correct. You can't just hand AI the keys to the car and go, yeah, you go generate the code and then run it, and let's see what happens. I mean, that could go horribly wrong. That probably will go horribly wrong.
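As a concrete sketch of that verification loop, assuming nothing beyond a tsc binary on the PATH (the gate function, flag choices, and file layout are illustrative, not a shipped workflow): generated code gets type checked before anything runs, and a 10x faster checker is what makes doing this inside an agent loop cheap.

```ts
import { mkdtempSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { spawnSync } from "node:child_process";

// Hypothetical gate: refuse to run AI-generated code unless the type
// checker signs off. `generated` would come from a model.
function typeChecksOk(generated: string): boolean {
  const dir = mkdtempSync(join(tmpdir(), "ai-check-"));
  const file = join(dir, "candidate.ts");
  writeFileSync(file, generated);

  // --noEmit: type check only, produce no output. Assumes tsc is on PATH.
  const result = spawnSync("tsc", ["--noEmit", "--strict", file], {
    encoding: "utf8",
  });
  if (result.status !== 0) {
    console.error(result.stdout); // tsc prints its diagnostics to stdout
  }
  return result.status === 0;
}

const candidate = `const n: number = "not a number";`; // would fail the gate
console.log(typeChecksOk(candidate) ? "run it" : "send the errors back to the model");
```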
Guest 2
You've got to have some form of verifiability, and what's going to provide that, what's going to keep AI honest, is things like compilers and language services. So there's this deeper interplay there that I think is going to become more and more important, and that's something we're going to double down on a lot.
Wes Bos
Awesome. Is there anything we didn't cover before we get into the last section here and wrap it up? Anything you wanted to say?
Guest 2
Well, I hope everyone goes and tries the work, tries it on their own code base as it's ready for it. Once we complete the features we're missing and it's possible for you to use it on your own code, I really hope people will do that and share their experiences with us, because there will be bugs and stuff that we need to fix, and it certainly helps to have community feedback. We want that conversation going as soon as it can.
Guest 1
Currently, it is sort of "come and build it yourself."
Guest 1
We don't distribute it at the moment, but we think that the build step should be fairly easy on the whole if you're willing to get your hands a little dirty with the toolset.
Guest 1
But there are no exotic things you need to install. It's mostly Go, plus a command line tool called Hereby that we already use to build TypeScript. Then you should be able to run it right on your system, play around with the really early language service extension that we have, and just see how fast it builds. And then, hopefully, we'll make it a little easier in the future to get your hands on it and try it out.
Wes Bos
Awesome. I'm really excited about it. So thank you so much.
Wes Bos
We'll get into the last section of the podcast real quick, where we have Sick Picks and Shameless Plugs.
Wes Bos
I don't know if you guys came prepared with either of those.
Guest 1
For my sick pick, it's a tool called pprof, and also a tool for JavaScript that Jake on our team built, called pprof-it, which is a really great profiling and visualization tool for these profiles.
Guest 1
You know, it's helped us optimize the JavaScript code base, the TypeScript version of the compiler. And it's also helped us find a lot of great wins as we've ported over the new code as well, because some allocation stuff is a little different here or whatever. But it has told us, hey.
Guest 1
You're able to get rid of all the hot paths here, and now this thing is blazing fast. Right?
Wes Bos
This will help you, like, figure out what is causing performance problems in your Node apps or JavaScript apps?
Guest 2
No, just more in our compiler implementation; we've used it on the new command line compiler and the new language service. But yeah, it's a format, and then it's also a visualizer. And there are ways of getting it from JavaScript, so you can get a JavaScript profile.
Guest 1
You can also get it; it was originally made for the Go community, right? It's great as long as you've got something from your language toolset, and there's a pretty good one for JavaScript to get this. Cool.
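For anyone wondering what "getting this from JavaScript" can look like: the pprof package on npm captures profiles in the same gzipped protobuf format the Go tooling visualizes. The package choice and duration below are illustrative assumptions, not necessarily the team's setup:

```ts
import { writeFileSync } from "node:fs";
import * as pprof from "pprof"; // npm package that emits pprof-format profiles

async function main(): Promise<void> {
  // Sample wall time for 10 seconds while your workload runs.
  const profile = await pprof.time.profile({ durationMillis: 10_000 });

  // Serialize to the gzipped protobuf format pprof understands...
  const buf = await pprof.encode(profile);
  writeFileSync("wall.pb.gz", buf);
  // ...then explore it, e.g. with `go tool pprof -http=:8080 wall.pb.gz`.
}

main();
```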
Guest 2
You know, a little anecdote there: I've been writing code for a long time, and you always think you know when it comes to performance. You go, oh yeah, I know this. This is slow. This is what I need to optimize. And then you go optimize, and nothing happens. And then you go, but surely it did this. And then you run a profiler, and then it's like, whoa.
Guest 2
Over there? What the heck's going on over there? That's actually where the bottleneck is, and it always surprises you. So profilers, in that sense, I think, are like this surprise generator.
Guest 2
But then once you listen to them, then kaboom, things start going faster.
Wes Bos
Anders, do you have a sick pick as well, or are you gonna piggyback on that one?
Guest 2
No, I'll piggyback on the shameless plug there. But the sick pick, this is actually, I don't know if it makes any sense, but this is a pick that my wife came up with that we use now. We're giving these as gifts to many friends, and they all love them. They're USB rechargeable hand warmers.
Wes Bos
I know, there's some stuff that you buy for that. I'm Canadian; I know all about that.
Guest 2
Yeah, they're awesome. I mean, instead of these chemical things that you have to shake, and God knows how good they are for the environment, right? You buy these little bars of soap, basically, that you recharge, and then they go for hours and hours, and they're awesome.
Guest 2
Friends go, that's the best thing. I put them in my back pocket when I go for a walk, and I'm nice and warm the whole time.
Wes Bos
Yeah, I've had a couple friends recommend them, because I have a heated vest that I wear often around the house or whatever.
Wes Bos
The whole thing of being able to produce heat with a battery is amazing. I know there's probably a limit to how much, but they're great. I should get a pair of these. I usually just use my laptop on a video call.
Guest 2
That's right. Yeah. It's getting hot in here. Yeah. Yeah. I know. It's been a long one. Yeah.
Guest 1
There's also the shameless plug. I mean, yeah, I don't know, Anders. Is our shameless plug just the new code base? Of course. Please, please go build it. Go try it out on your code. See what you think.
Guest 2
And, ultimately, we're gonna invite you to contribute as well.
Guest 2
Right now, though, given the velocity, we're probably not going to take pull requests early on, because there's just so much going on already, and we just want to get the port done.
Guest 2
But this is gonna be an open source project just like the old TypeScript was. So yeah. Beautiful.
Wes Bos
Yeah. Alright. Well, thank you so much for all your work on TypeScript over the years, to both of you.
Wes Bos
I'm super excited about this project as well, and thank you for all your time coming on and explaining everything to us. Yeah. Thank you so much. Thanks for having us. It's been a pleasure.