
Leading in the era of AI code intelligence

source link: https://changelog.com/podcast/580

Transcript

šŸ“ Edit Transcript

Changelog


So Quinn, it's good to see you, good to have you back… I want to really talk about the evolution of the platform, because the last time we talked it was kind of like almost pre-intelligence… It was kind of almost still search. And like just after that you went buck wild and had a bunch of stuff happening… And now obviously a whole new paradigm, which is artificial intelligence, a.k.a. AI… But good to have you back, good to see you. How have you been?

Yeah, it's great to be back. I've been good. I think, like everyone, it's been quite a whirlwind over the last four years, over the last year with AI… And we've come a long way. We talked two years ago, and we talked a lot about code search…

Was it two years ago?

Two years ago.

So a lot changed in two years.

Yeah, there's been about 10 years in the last two years, through the pandemic, and now AI… And we have grown a ton as a company, our customer base, and all that… And yeah, two years ago we were talking about code search, and that's what we had built, and we still have code search that looks across all the code, and understands all the calls, the definitions, the references, the code graph, and all those great things. And we've got a lot of great customers on that. You know, many of the FAANG companies, and four of the top ten US banks, and Uber, and Databricks, and governments, and Atlassian, and Reddit, and so on.

[00:05:56.12] But itā€™s not just code search anymore. The world has changed for software development in the last year so much.

What is it like being CEO of a company like that? I mean, you're a founding CEO, so this isn't like "Oh, you inherited this awesomeness." You built this awesomeness. How does it feel to be where you are?

Sometimes it is exciting and scary to realize that I as CEO have to make some of these decisions to go and build a new product, to change our direction. And I feel so lucky that I have an amazing team, and that people are on board, and people are bringing these ideas and the drive to change things up… But it's definitely weird. I mean, it's one thing to try a new side project dabbling with some of these new LLMs; it's another thing to shift a 200-person company to going and building that. But as soon as you do that, just the progress that you see – I mean, it's so validating. And then obviously, hearing from users and customers, it's so validating… So it's been I think a whirlwind for everybody.

When you make choices like this, do you have to go to – I know you're funded, so you probably have a board. So you have other people who help you guide the ship. So it's not "Hey, I'm CEO, and we just do whatever I want…" You also have Beyang Liu, your founding co-founder, CTO. I'm very keen on Beyang, I've known him for years. I want to backtrack a little bit, but I want to stay here for a minute or two. When you make a choice to go from code search to code intelligence, to now introducing Cody, your product for code gen, for – and code understanding as well. I mean, it's so much more. It's got a lot of potential. When you make a choice like that, how do you do that? Do you have to go to the board? What's that like? Give me an example of what it takes to sort of change the direction of the ship, so to speak.

Yeah. If you go back to the very founding of Sourcegraph, we decided on a problem to solve, which is big code. It was way too damn hard to build software. There's so much code, there's so many devs, there's so much complexity… And back when we started Sourcegraph 10 years ago, we felt that, you know [unintelligible 00:08:05.01] companies felt that… Now you start a brand new project and it brings in like 2,000 dependencies, and you have to build on all these super-complex platforms… Stuff is getting so much more complex. So we agreed on a problem, and we got our investors, we got our board, we got our customers, our users, our team members all aligned around solving that problem. And not one particular way that we solve it. And if you go back to our plan, actually, in our very first seed funding deck, it talks about how first we want to build the structured code graph. And then we want to do Intelligent Automation. That's IA. I think we probably would have said AI, except back at the time if you said AI, people thought that you were completely crazy.

You know, it's unfolding – I won't say exactly; we didn't have a crystal ball back then. But it's unfolding roughly as we expected. And we knew that to do more automation in code, to take away that grunt work from developers, so they could focus on actual problems and the stuff they love, you needed to have the computer have a deep understanding of the codebase. It couldn't just all be in devs' heads. So it was no big surprise to our board or our team that this was something that we would do, that we would go and build our code AI. And it was also not some complete luck of the draw that we found ourselves in a really good position to go and build the most powerful and accurate code AI. So none of this is coincidental. But when do we do that? And I think if we had started to do that, say, in 2018, when there were plenty of other attempts to do ML on code, I think that we would have failed, because the fundamental underlying technology of LLMs was just not good enough back then.

[00:09:57.16] And the danger there is if we had started doing it in 2018 and we failed, we might have as an organization learned that stuff doesn't work. And then we would have been behind the ball when it actually did start to work. So getting the timing right was the tough part. And I actually think that we probably waited too long, because we could have brought something to market even sooner. But it's still so early in terms of adoption of code AI; even less than 0.5% of GitHub users are using GitHub Copilot, so it's still very early. Most devs are not using even the most basic code AI that exists. So it's still very early.

But getting the timing right, and making a big shift, starting back last December… That was when we started to see Cody really start to do some interesting things. That felt early to a lot of people on the team. And it took a lot of work to get the team on board, and to champion what the team was doing that was working, and to shift away from some of the things that we could see were not going to be as important going forward.

What a shame though, right? …that fewer people are using code AI-related tooling. I'm not sure what exactly is happening, because there's this idea that it might replace me, and so therefore I just resist it. And I'm just assuming that's probably some of the case for devs out there… Because I've used it, and I think it's super-magical. And I'm not trying to generate all the things, I'm trying to move faster, with one more point of clarity, if not an infinite point of clarity that can try something 1,000 times in five minutes for me, so I don't have to try it 1,000 times in a week or two. Whatever it might be. And that might be hyperbole to some degree, but it's pretty possible. I just wonder, why are people not using this more frequently? Is it accessibility? Is the access not evenly distributed? What do you think is happening out there? What's the sentiment out there of why more tooling like this isn't being used?

Well, I think this applies even to ChatGPT. ChatGPT – it's amazing. It changed the world. It's mind-blowing. It can do incredible things. And yet, you ask the average person how often they're using ChatGPT in their everyday life or their everyday workweek, and the answers I usually get are "Maybe one or two times." You hear the stories of people that say "I ask it to write emails for me. And what it writes is ten times too long." And the technology is there, the promise is there, but in terms of actually something that is so good, and understands what someone is doing, understands the code they need to write for developers – that's still not completely there yet. And at the same time, the promise is there.

So I really want to make sure that we as an industry, everyone building code AI, that we level with developers out there, with what exists today, what works well today, what doesn't, what's coming in the future, and not lose credibility by overhyping this whole space. I think that's a huge risk. And I actually look at self-driving cars. 10-15 years ago you started to hear about autonomous vehicles; there was so much hype. People thought "Are humans even gonna be driving in the year 2020?"

Clearly, we are… And some people are kind of jaded by all that hype, and they just dismiss the entire space. And yet, in San Francisco here, there's two companies where you can get a self-driving taxi. And that is amazing. That is mind-blowing. The progress is real. It was a little slower than people thought if you were just reading the hype, but I think that most of the industry experts would have said "Yeah, this is about the rate of progress that we'd expect."

So we don't want that kind of mistake to happen with code AI. We want to be really clear that it's not replacing developers. Those tweets you see where it's like "Oh, you fire all your junior developers. You can replace them with", whatever, this AI tool someone is shilling. Those are completely false, and those detract from the incredible progress that we're seeing every single day with code AI getting better and better.

[00:14:20.18] The autocomplete that code AI can do is really powerful. I think that could probably lead to a 20% boost to developer productivity, which is really meaningful. But then having it write entire files, having it explain code, understand code… We're working on that with Cody, and Cody does a pretty good job of that. It's really helpful. And you see a lot of other work there. That is really valuable. And it doesn't need to be at the point where, you know, it's Skynet for it to be changing the world.

Yeah, for sure. Can we talk about some subtle human leveling up that's practical for ChatGPT? I mean, I know it's not Cody. Do you mind riffing a little bit? So last night my wife and I, we were hanging up pictures of our beautiful children. We took pictures of them when they were less than one week old, and then we have pictures of them in the same kind of frame at their current ages, and one's seven and one's three. So it doesn't really matter about the age. They're just not one week old anymore. So you have this sort of brand new version of them, and then the current version, to some degree. And it's four pictures, because we have two sons, and we want to hang them on the wall. And my wife was trying to do the math – and we can obviously do math; it's not much. It's like an eight-foot-wide wall; we want to put them in a [unintelligible 00:15:29.05] with even spacing, all that good stuff. I'm like "Babe, we should ask–" And like, I'm more versed in this than she is; not so much that she doesn't use it often, she just doesn't think to. And I think that might be the case of why it's not being more widely used: they don't think to use it. And I'm like "I don't want to use it. It's a word calculator." Or rather, "I want to use it; it's a word calculator. I don't want to think about this problem myself. I don't want to do the math." I could just tell it the problem, my space, my requirements, and it will just – it will tell me probably too much, but it will give me a pretty accurate answer. I'm like "Let's just try it", and she's like "Okay." And so I tell it "Hey, ChatGPT, I have this eight-foot, five-inch wall in width, and I want to have these pictures laid out in a grid. They're 26 inches square, 26 wide, 26 tall, and I want to have them evenly distributed on this wall in a four grid."

It gave me the exact answer, told me exactly where to put them. We did it in five minutes, rather than doing the math, and making a template, and writing all these things on the wall. It was so easy, because it gave us the exact right answer. That's cool.
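For what it's worth, the arithmetic is simple to check. This sketch fills in details the episode doesn't pin down – it assumes a 2x2 grid (two frames per row), the six-inch gap mentioned a bit later, and the row centered on the 101-inch (8 ft 5 in) wall – and computes where each frame's left edge should land:

```python
# Hypothetical reconstruction of the spacing math; the grid layout and
# gap size are assumptions, not confirmed by the episode.

def frame_positions(wall_width, frame_width, gap, frames_per_row):
    """Return the left edge of each frame, centering the row on the wall."""
    group = frames_per_row * frame_width + (frames_per_row - 1) * gap
    margin = (wall_width - group) / 2  # equal leftover space on each side
    return [margin + i * (frame_width + gap) for i in range(frames_per_row)]

# 8 ft 5 in wall = 101 inches; 26-inch frames; 6-inch gap; 2 frames per row
positions = frame_positions(101, 26, 6, 2)
print(positions)  # left edges in inches: [21.5, 53.5]
```

So each row would start 21.5 inches from the wall's edge, with the second frame's left edge at 53.5 inches.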

That's awesome.

That to me is like the most uniquely subtle human way to level up. And I think there's those kinds of problems in software that are being missed by developers every single day, to not X their day. And what I mean by that is like 1x, 2x, 5x, whatever it might be; if I can do a task in five minutes, not because it does that for me, but because it helps me think faster and get to the solution faster, then I want to do that, versus doing it in 15 minutes, or an hour or so. What do you think about that?

Yeah. So when you asked it that, did it give you the exact right answer on the very first reply?

Yes. Yes, it did.

That's awesome.

Yeah. I've found a way to talk to it that it does that. And I don't know if it's like a me thing, but I get pretty consistently accurate answers. Now, it also gave me all the theory in a way, too. "The combined width of this and that, and two times this, and whatever that" – I don't really care. I just want to know the end, which says "So if you want six inches in between each picture frame, you should do this and that and this and that." Like, it gave me the ending; just skip to the ending. Just give me the good parts.

But I'm willing to just wait; literally, maybe 10 seconds extra. That's cool with me.

Yeah. Well, that's incredible. And I think that there's probably –

Isn't that incredible?

[00:17:52.05] Yeah. There's so many things like that in your everyday life where you could use it. And it probably won't get it 100% correct, but I mean, what an amazing time to be living, when that new technology is suddenly possible. And it hasn't trickled down to all the things that it can change. And when you think about that underlying capability, this kind of brain that can come up with an answer to that question – how do we make it so that it can do more code? The way that a lot of people think about code AI is autocompleting the next line, or few lines. And that's a really good problem for AI, because just like with your picture framing example, the human is in the loop. The human is reviewing a digestible, reviewable amount of AI-suggested code. And so you're never having to do things that the human cannot look at. If the AI told you "Hey, if you want to put pictures up on the wall, first crack some eggs, and put them on the stove", you'd be like "That makes no sense." And you would have caught it.

So that human in the loop is really important. The next step though, and how we get AI beyond just a 20% productivity enhancement, is "How do we have the AI check its own work?" And I don't mean the LLM, I mean how do we have an AI system? One very simple example is right now any AI autocomplete tool will sometimes suggest code that does not type-check, or does not compile. Why is that? That should no longer be the case. That's one of the things that we're working on with Cody. So don't even suggest code that won't type-check. How can you bring in context about the available types in the type system so that it will produce a better suggestion, and then filter any suggestions that would not type-check? And in some cases, then go back to the LLM, invoke it again, with additional constraints. And then why stop at type checking? Let's make it so you only suggest code where the tests pass; or you suggest code where the tests don't pass, but then you also suggest an update to the tests, because sometimes the tests aren't right. And all the advances in the future with code AI that I think are critical to making it so amazingly valuable are about having the AI check its own work and bringing real-world intuition to it, so it's not relying on that human in the loop.
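The generate-verify-retry loop described here can be sketched in a few lines. This is purely illustrative – Cody's internals aren't public – so the "model" and the checker below are toy stand-ins (a Python syntax check in place of a real type checker), but the shape matches what's described: only surface candidates that pass the check, and feed failures back to the model as constraints.

```python
def syntax_checks(code):
    """Toy verifier: a stand-in for a real type checker or compiler."""
    try:
        compile(code, "<candidate>", "exec")
        return True, None
    except SyntaxError as e:
        return False, str(e)

def suggest_code(generate, prompt, max_retries=3):
    """Return the first candidate that passes the checker; feed each
    failure back to the generator as an extra constraint, and return
    None (suppress the suggestion) if every attempt fails."""
    feedback = None
    for _ in range(max_retries):
        candidate = generate(prompt, feedback)
        ok, error = syntax_checks(candidate)
        if ok:
            return candidate  # only ever surface verified code
        feedback = error      # e.g. appended to the next model prompt
    return None

# Toy "model": wrong on the first try, fixed once it sees checker feedback
def fake_llm(prompt, feedback):
    return ("def add(a, b) return a + b" if feedback is None
            else "def add(a, b): return a + b")

print(suggest_code(fake_llm, "write an add function"))
```

A real system would swap in `tsc`, `mypy`, or a compiler for the checker, and run the test suite as a second verification stage, but the control flow stays the same.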

Yeah. I guess my concern would be latency, right? Like, if you've got to add not just generation, but then checking, linting, testing, correctly testing, canceling out… you've got a lot more in that buffer between the prompt, which we're all familiar with, and the ending of the response. I always wonder, why does it take ChatGPT in particular time to generate my answer? Is it really thinking and giving me the stream of data on the fly? Or is there some sort of – is that an interface that's part of usability, or part of UX? And I just wonder, in that scenario that you gave, would the latency affect the user experience?

Yeah, absolutely.

Of course, right?

Yeah. We have incredibly tight latency budgets. We look at getting the 75th percentile latency well below 900 milliseconds. And once you start invoking the LLM multiple times to check its own work, to go back and redo the work, once you start invoking linters and type checkers… I think we've all been in a situation where we hit Save in a file in our editor, and we see "Oh, waiting for the linter to complete." Sometimes that can take a few seconds in big projects. So this requires, I think, a rethinking of a lot of the dev tooling. Because in the past, it was built either for this "human is editing a single file at a time" interactive mode, or for CI, where latency is not that sensitive. But I look at just the difference between Bun running tests in a JavaScript project versus another test runner… And bringing that down to 200-300 milliseconds instead of 5 or 10 seconds or more is really critical. I look at things like Ruff, rewriting a Python linter in Rust to make it go so much faster. I mean, I wish something like that existed for ESLint. And we need to bring the latency of all these tools that devs use in that edit loop down by several orders of magnitude to make this possible. But I think the reward, the pot of gold at the end of the rainbow if we do all of that, is so great, because it will enable AI to take off so much of the grunt work that we ourselves do. So I don't know if that's the motivation behind some of these linters and new test runners and so on, but I love that those are coming out, because that will make this fundamentally possible.
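As a side note, a p75 budget like the one mentioned is straightforward to monitor. A minimal sketch, where the 900 ms budget is the only number taken from the conversation and the sample latencies are made up:

```python
import statistics

BUDGET_MS = 900  # the p75 target mentioned in the conversation

def p75(samples_ms):
    # quantiles(n=4) returns the three quartile cut points;
    # index 2 is Q3, i.e. the interpolated 75th percentile
    return statistics.quantiles(samples_ms, n=4)[2]

# made-up completion latencies in milliseconds
samples = [120, 250, 300, 480, 510, 640, 700, 850, 980, 1200]
print(p75(samples), p75(samples) <= BUDGET_MS)
```

In practice you'd compute this over a sliding window of real request timings and alert when the quartile drifts over budget.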

So recently, at All Things Open, Jerod conducted a panel with Emily Freeman and James Q. Quick… And really, one of the questions he asked was – you call it grunt work in this scenario, and Jerod argued that maybe that's the joy work. Does AI steal the joy work, Quinn? Some of this stuff is fun, and some of it is just a means to an end. Like, not all developers really enjoy writing the full function themselves. And some of them really do, because they find coding joy. What are we doing here, are we stealing the joy?

I love nothing more than having six hours of flow time to fix some tech debt, to do a really nice refactor… And as CEO, sometimes that's the best code for me to be writing, because I do love coding, rather than some new feature, some new production code… So yeah, I totally feel that. And at the same time, I choose to do that by writing in TypeScript, by using a GUI editor, Emacs or VS Code – I choose to do that by writing in Go. I'm not choosing to do that by going in and tweaking the assembly code, or… You know, we're not using C. So I've already chosen a lot of convenience and quality-of-life improvements when I do work on tech debt. It's not clear to me that the current level is exactly right. I think that you can still have a lot of the really rewarding puzzle-solving, the fun parts of the grunt work, and have the AI do the actual grunt of the grunt work. And I think it's different for everyone… But as we get toward AI starting to – and to be clear, it's not here yet. But as we work as an industry toward AI being able to take over more entire programming tasks, like building a new feature, then it's gonna have to take on both the grunt work and the fun work from the programmer. And if someone only wants to hand over half of that, that's totally fine. My co-founder Beyang – he uses Emacs, but in a terminal, not in a GUI. So it's a free country, and devs can choose what they want.

That's right. Okay. I guess I was saying that more as a caution to you, because half of the audience cringed when you said grunt work, and the other half was like "You're taking my joy away." Some of them are happy, and then some of them are like "Let's not use a pejorative for the work we love." You know what I mean?

Well, I think grunt work is different for each person. I think a lot of people would consider the grunt work to be all the meetings, and all the reading of documents, and the back and forth, and the bureaucracy of their job…

They hate that part. And they just love coding. And I say we need it, we need AI in the future to be able to digest all that information that comes from all these meetings, and to distill the requirements. So let the AI do that for them, and then they can just have a glorious time coding. And we used to joke at Sourcegraph that we hoped Beyang and I would create Sourcegraph, and it'd be so damn good that we could just retire and spend all day coding in some cave… And look, I totally feel that, and we want to bring that to everyone. And if they want to do that, then they should be able to do that.

Yeah. So two years ago we would not have opened this conversation with a discussion of artificial intelligence. Two years ago – and you were the one who looked up when that was, not me. I didn't even look at the last time we talked. I knew it was not yesterday, and it was not last year. I just wasn't sure how far back it was. What has changed with Sourcegraph since then? I mean, you've grown obviously as a company, you've got two new pillars that you stand on as a company… Code search was the origination of the product, and then you sort of evolved that into more of an intelligence platform, which I think is super-wise… And then obviously, Cody, and code generation, and code understanding, and artificial intelligence, LLMs, all the good stuff. What has changed really, from a company level? What size were you back then? Can you share any attributes about the company? How many of these FAANG and large enterprise customers did you have then versus now? Did they all come for the Cody and stay for the Sourcegraph, or was it all one big meatball? How do you describe this change, this diff?

[00:30:02.09] Yeah, two years ago we were code search. And that's like a Google for all the code in your company. It's something that you can use while coding to see how did someone else do this, or why is this code broken? How does this work? You can go to find references, go to definition across all the code… And at the time, we were starting to introduce more kinds of intelligence, more capabilities there. So not just finding the code, but also fixing the code with batch changes, with code insights so you could see the trends. For example, if you're trying to get rid of some database in your application, you could see a graph where the number of calls to that database is going down, and hopefully, the new way of doing things is going up. So all these other kinds of intelligence. And that stuff is incredibly valuable. Millions and millions of devs love code search and all these things. And with code search, that was about feeding that information to the human brain, which is really valuable. And the analogy that I would draw is ChatGPT, again, changed the world, but we all use Google search, or whatever search you use, way more than we use ChatGPT today. And yet, everyone has a sense that something like ChatGPT, that kind of magic pixie dust will be sprinkled on search, and we'll all be using something that's kind of in between. ChatGPT is probably not the exact form factor of what we'll be using. Google Search circa two years ago is not what we'll be using. But there'll be some kind of merger. And that's the journey that we've been on over the last couple years: taking code search, which fundamentally builds this deep understanding of all the code in your organization – and we've got a lot of reps under our belts making it so that humans find that information useful… Now, how do we make that information useful to the AI, and then make that AI ultimately useful to the human?
So how can we use this deep understanding of code to have Cody, our code AI, do much better autocomplete, with higher accuracy than any other tool out there? How can we have it use that understanding of how you write tests throughout your codebase, so that it will write a better new test for you using your framework, your conventions? How do we make it really good at explaining code? Because it can search through the entire codebase to find the 15 or 20 relevant code files.
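That "find the 15 or 20 relevant code files" step can be pictured as a retrieval problem. Cody's actual context engine (the code graph, references, and so on) is far more sophisticated and not shown here; this is a deliberately naive bag-of-words sketch over a hypothetical mini-codebase, just to make the idea concrete:

```python
import re

def tokens(text):
    """Lowercased word tokens; a crude stand-in for real code analysis."""
    return set(re.findall(r"\w+", text.lower()))

def rank_files(question, files, top_k=2):
    """Score each file by word overlap with the question and return the
    top_k paths - the 'relevant files' that would be fed to the LLM."""
    q = tokens(question)
    scored = sorted(((len(q & tokens(text)), path)
                     for path, text in files.items()), reverse=True)
    return [path for score, path in scored[:top_k] if score > 0]

# Hypothetical mini-codebase, purely for illustration
files = {
    "auth/login.py": "def login(user, password): verify password hash",
    "billing/invoice.py": "def create_invoice(customer, amount): pass",
    "auth/hash.py": "def verify_hash(password, stored): pass",
}
print(rank_files("how do we verify a password hash", files))
```

The point is the pipeline shape: retrieve a small, relevant slice of a huge codebase first, then let the model generate against that context instead of the whole repository.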

So we're building on this foundation of code search… And what I'll say about code search is I use it all the time. I think every dev would do well to use code search more. It's so good at finding examples. Reading code is the best way to level up as a software engineer… But Cody and code AI is something that every dev thinks they should be using. So given that they solve so many of the same problems – this problem that caused us to found the company: it's so damn hard to build software, it's really hard to understand code – they both solve the same problem. And if what people want is Cody more than code search – well, code search still exists, and it's growing, and it's the foundation for Cody… But we're going to be talking about Cody all day, because that's what people are asking for. And that's what we hear from our users. We see a lot of people come in for Cody, and then they also realize they love code search… But I think Cody is going to be the door in. It's so easy to get started, and it is just frankly magical. I think everyone can speak to that magic that they see when AI solves their problem. Like you did with that picture frame example.

Yeah. Can you speak to the ease with which you could sell Sourcegraph, the platform, two years ago, versus how easy it is to sell it now? You kind of alluded to it to some degree, but can you be more specific?

Yeah. Two years ago would have been 2021, the end of 2021, which was the peak of the market; the peak of kind of everything. And I think there's been a lot of big changes in how companies are hiring software engineers, and budget cuts, and so on. So we've seen a big change over the last two years. Code search has grown by many, many times since then…

[00:34:01.11] But what we saw is, with companies realizing "Hey, maybe we're not going to be growing our engineering team at 50% each year", we saw a lot of kind of developer platform, developer happiness, developer experience initiatives get paused in favor of cost cutting. "How can we figure out what are the five dev tools that we truly need, instead of the 25?" Whereas in the past, if a dev loved something, then yeah, they'd go in and plop down a bunch of money.

And so we were well positioned, because we had such broad usage… And because a lot of companies looked at us as a platform – they built stuff against our API, and every team used it – we were in a good position there. I think though, if AI had not come out about a year ago, then I don't know what the dev stack would look like. I think you'd have a lot of companies that realized "Hey, we've been keeping our eng hiring really low for the last two years…" I'm not sure now – companies see AI as a way to get as much as they were getting in the past, but with fewer developers. And developers see it as a way to improve their productivity. And I think the missing piece that we're not fully seeing yet is there's a lot of companies out there that would love to build more software, but were just unable to, because they didn't know how to, they were not able to hire a critical mass of software engineers, they were not in some of the key engineering hiring markets, developers were too expensive for them to hire… All these companies that would have loved to build software were just bottlenecked on not being able to find the right engineers. I think that AI is going to help them overcome that, and you're gonna see software development be much more broadly distributed across a lot of companies. And that is what's exciting.

So looking at the overall software developer market: around 50 million professional developers, and around 100 million people who write code in some way in their job, including data analysts. I fully expect that number to go up, and I fully expect that pretty much every knowledge worker in the future is gonna be writing some code in some way. So I'm not pessimistic on the value of learning how to code at all… But there's just been massive change in how companies see software development and the structure of teams over the last couple of years.

I think when we talked last time you were saying, either exactly or in a paraphrasing way, that it was challenging to sell code search. That it was not the most intuitive thing to offer folks. You, the founders, obviously understood how deeply useful it was, because you worked inside of Google; you saw a different lens on code search… And most people just saw Command+F, or even Command+Shift+F, as something that was built in, rather than something that you went and bought, and stood up separately as a separate instance, with this other intelligence. And that was hard to sell. However, code search that is understood by an LLM – Cody – is a lot easier to offer, because you can speak to it. Very much like we've learned how to chat with artificial intelligence to generate and whatnot.

So I'm curious… Even when we were done talking last time on Founders Talk, you weren't ready to share this intelligence side, which was also the next paradigm. I think this intelligence factor – obviously, code search gives you intelligence, because you can find and understand more… But it was the way that you built out insights and different things like that, that allowed you to not only manually type everything you can into search, like a caveman or cave person; you could just sort of form an intuitive graph – like you'd mentioned before, the calls to one database going down, and calls to the new database going up, and you can see the trend line toward progress. Clearly. And even share that dashboard with folks who are not in development, in engineering. Sharing with comms, or marketing, or CEOs, or whomever is just not daily involved in the engineering of their products. And I'm just curious… Give me real specifics: how easy is it to sell now, given that Cody makes the accessibility, the understandability of what Sourcegraph really wanted to deliver so much easier?

[00:38:11.25] Yeah, Cody does make it so much easier. And yeah, going back two years ago, we had a fork in the road. We could have either made just code search something that clicked with so many more developers, and overcome that kind of question, which is "You know, I've been coding for 10 years. I haven't had code search. I have it in my editor. Why would I need to search across multiple repositories? Why would I need to look through different branches? Why would I need kind of global [unintelligible 00:38:39.15] definition? Why would I need regex search that works?" We got a lot of questions like that. We could have just doubled down on that and tried to get way more devs using it for open source code, and 100% of developers at all of our customers using code search. We could have done that. What we decided to do instead was go deeper into the intelligence, to build things that were exposed as more power-user tools, like code insights. Code Insights is something that platform teams, architects, security teams, managers love – it has incredible value for them – but the average application engineer isn't really looking at code insights, because they're not planning these big, codebase-wide refactors. Same with batch changes. Platform teams love it; people that have to think in terms of the entire codebase, rather than just their feature, need it. And I think we got lucky, because right around that time is when developer hiring began to really slow down. It was really helpful for us to get some really deep footholds with these critical decision-makers in companies, just from a sales point of view – to have very deep value, instead of kind of broad, diffuse value.

So that ended up being right. It also ended up being right in another way, which is we got deeper in terms of what does Sourcegraph know about your codebase? And that was valuable for those humans over the last couple of years, but itā€™s also incredibly valuable now, because we have that kind of context that can make our code AI smarter. But I do really lament that most devs are not using code search today. I think itā€™s something that would make them much better developers, and thereā€™s absolutely a part of me that wishes I could just go have 50 amazing engineers here work on just making it so that code search was so damn easy to use, and solved every developerā€™s problem. Now weā€™re tackling that with Cody, because weā€™ve got to stay focusedā€¦ And to your point, they do solve the same problem. And with code search, if youā€™re trying to find out ā€œHow do I do this thing in code?ā€, code search will help you find how all of your other colleagues did it. Cody will just look at all those examples and then synthesize the code for you. And so thereā€™s so much similarityā€¦ And we are just finding that Cody is so much easier to sell.

But we did have a cautionary moment that I think a lot of other companies did. Back in February through May of 2023, if you said AI, if you said ā€œOur product has AIā€, literally everyone would fall over wanting to talk to you, and theyā€™d say ā€œMy CEO has given me a directive that we must buy AI. We have this big budget, and security is done, legal is done, we have no concerns. We want it as soon as possible.ā€ And it didnā€™t matter if the product wasnā€™t actually good. People just wanted AI. And that, I think, created a lot of distortions in the market. I think a lot of product teams were misled by that. Iā€™m not saying the customers did anything wrong; I think we were all in this incredible excitement. And we realized that we didnā€™t want to get carried away with that. We wanted to do the more boring work, the work of ā€œTake the metrics of accuracy, and DAUs, and engagement, and overall a lovable product, and just focus on that.ā€ We did not want to go and be spinning up the hype.

[00:42:04.06] So we actually really pulled back some of this stuff and we level-set with some customers that we felt wanted something that nobody could deliver. And that was one of the ways that we came up with these levels of code AI taking inspiration from self-driving cars. We didnā€™t want the hype to make it so that a year from now everyone would become disillusioned with the entire space. So definitely a big learning moment for us. And if thereā€™s an AI company out there that is not looking at those key user metrics that have always mattered, the DAU, the engagement, the retention, the quality, then youā€™re gonna be in for a rude awakening at some point, because exploratory budgets from customers will dry up.

Well said. I think itā€™s a right place, right time thing, really. Or I would say the right insight a long time ago got you to the right place at the right time. Because everything that is Cody is built on the thing you said you lament that developers wouldnā€™t use; itā€™s built on all the graph and all the intelligence thatā€™s built by the ability to even offer code search, at the speed that you offer it. And then obviously, your insights on top of that. Itā€™s like having the best engine and putting it in the wrong car, and nobody wants to buy the car… And then suddenly you find this shell that performs differently - I donā€™t know, in all the ways it just feels better to use, more straightforward to use; you still have the same engine, itā€™s still the same code search, but itā€™s now powered by something that you can interact with in a meaningful way, like weā€™ve learned to do by having a humanistic conversation with software running on a machine.

I think thatā€™s just such a crazy thing - thatā€™s why I wanted to talk to you about this… I mean, some people who know your name think that Sourcegraph was born a year or two ago. And youā€™ve been on like a decade-long journey. I donā€™t even know what your number is; itā€™s getting close to a decade, if not past it, right?

Yeah. We started Sourcegraph a decade ago.

And so Iā€™ve been a fan of yā€™alls ever since then. And for a long time, just a fan hoping that you would get to the right place, because you provided such great value that was just hard to extract, right? Extracting the value from Sourcegraph is easier thanks to Cody than it was through code search, for the obvious reasons we just talked about. Thatā€™s an interesting shift to be in, because youā€™re experiencing, Iā€™m assuming, some degree of hockey stick-like growth as a result of the challenges you faced earlier now being diminished to some degree, if not all degrees, by the ease of use that Cody and things like Cody bring.

Yeah. And code search, when we started bringing that to market in 2019, that was a hockey stick. But now we realized that was a little league hockey stick, and that now this is the real hockey stick.

And Iā€™ve been reading ā€“ I love reading history of economics, and inventions, and so onā€¦ And Iā€™ve been reading about the oil industry. The oil industry got started when someone realized ā€œOh, thereā€™s oil in the ground, and this kerosene can actually light our homes much better and much more cheaply than other kinds of oil, from whales, for example.ā€ And initially, oil was all about illumination. Make it so that humans can stay up after 6pm when the sun goes down. And that was amazing. But thatā€™s not how we use oil today. Oil is just this energy that powers everything; that powers transportation, that powers manufacturing, that powers heating, and so on. And there were people that made fortunes on illumination oil, but that pales in comparison to the much better use of oil for our everyday lives. And now, of course, you have renewables, and you have non-oil energy sourcesā€¦ But for a long time, we saw that that initial way of using oil was actually not the most valuable way.

[00:46:14.09] So seeing that this just happens over and over - a new technology is introduced and youā€™re not quite sure how to use it, but you know that itā€™s probably going to lead to something… Thatā€™s how we always felt with code intelligence, and thatā€™s why this new era of intelligence is so exciting for us now.

One of the really exciting things weā€™re seeing is that so many people are shocked by these LLMs - you speak to them like humans. They feel much more human-like than what we perhaps anticipated AI would be like. We think of AI from movies as being very robotic, as lacking the ability to display empathy, and emotion, and thought processes. But actually, thatā€™s not at all what weā€™re seeing with LLMs. Iā€™ve even seen some studies that show LLMs can be better at empathy than a doctor with a poor bedside manner, for example. And for us, this is absolutely critical, because all this work we put into bringing information about code to the human brain - it turns out that AI needs that same information. Think about the human: if you started a new software engineering job, you get your employee badge, you read through the code, you read through the docs; if thereā€™s an error message, you look at the logs; you go in team chat, you join meetings… Thatā€™s how humans get that information. And AI needs all that same information. But the problem is, you cannot give AI an employee badge and have it roam around the halls and stand at the watercooler. Thatā€™s just not how AI works.

So we just happen to have broken down all that information into something we can work with programmatically. And now thatā€™s what we teach Cody.

I always throw the word yet in there whenever I talk about status quo with artificial intelligence or innovation… Because my son - heā€™s three - loves to watch ā€œthe robot dance videoā€, as he calls it. Itā€™s Boston Dynamics, that ā€œDo You Love Meā€ song, and they have all the robots dancing to it. And Iā€™m just thinking, ā€œWhen is the day when itā€™s more affordable to produce that kind of humanoid-like thing that can perform operations?ā€ Now, I know itā€™s probably not advantageous to buy an expensive Boston Dynamics robot to stand at your water cooler. But thatā€™s today. What if 50 years from now itā€™s far more affordable to produce those, and theyā€™re mass-produced with capabilities completely separate from what exists today? Maybe it might make sense eventually to have this water cooler-like scenario where youā€™ve got a robot thatā€™s the thing youā€™re talking to. Iā€™m just saying. Thatā€™s why I said the word yet.

Yeah, yeahā€¦ And youā€™ve got to have this humility, because who knowsā€¦?

Okay, so letā€™s talk some about winning. Can we talk about winning for a bit? So if you were on this little league hockey stick with search, and now itā€™s obviously a major league hockey stick - I think youā€™re head-nodding to that to some degree, if not vocally affirming it…

When I search ā€œGitHub Copilot versusā€ - because Copilot has brand-name recognition, being one of the first AI code-focused tools out there… Now, obviously, ChatGPT broke the mold and became the mainstream thing that a lot of people know about, but itā€™s not built into editors directly; it might be through GitHub Copilot and Copilot X… But even when I search ā€œGitHub Copilot Xā€, or just ā€œCopilotā€ by itself plus ā€œversusā€, Cody does not come up in the list. Tabnine does, and even VS Code does… And that might be biased to my Google search. And this is an example where Iā€™m using Google versus ChatGPT to give me this versus. Now, I might query ChatGPT and say ā€œOkay, who competes with GitHub Copilot?ā€, and you might be in that list. I didnā€™t do that exercise. What Iā€™m getting at is, of the lay of the land of code AI tooling, are you winning? Who is winning? How do they compare? What are the differences between them all?

Yeah, Copilot deserves a ton of credit for being the first really good code AI tool, in many ways… And I think at this point itā€™s very early. Just to put some numbers to that, GitHub itself has about 100 million monthly active users, and according to one of GitHubā€™s published research reports - thatā€™s where I got that 0.5% number from - they have about a million yearly active users. And thatā€™s people getting suggestions, not necessarily even accepting them. So a million yearly actives - what does that translate into in terms of monthly actives? Thatā€™s a tiny fraction of their overall usage. Itā€™s a tiny fraction of the number of software developers out there in the world. So I think itā€™s still very early. And for us, for other code AI tools out there, I think people are taking a lot of different approaches. Thereā€™s some that are saying ā€œWeā€™re just gonna do the cheapest, simplest autocomplete possibleā€, and thereā€™s some that are saying ā€œWeā€™re gonna jump straight to trying to build an agent that can replace a junior developerā€, for example. So youā€™re seeing a ton of experimentation. What we have, which is unique, is this deep understanding of the code. This context. And another thing we have is a ton of customers where Sourcegraph is rolled out over all of their code. And working with those customers - I mentioned some of the names before… These are customers that are absolutely on the forefront, that want this code AI, and itā€™s a goldmine for us to be able to work with them.

So when you look at whatā€™s our focus, itā€™s: how do we build the very best code AI that actually solves their problem? How do we actually get to the point where the accuracy is incredibly high? …and we see Cody having the highest accuracy of any code AI tool, based on completion acceptance rate. How do we get to the point where every developer at those companies is using Cody? And thatā€™s another thing weā€™ve seen - thereā€™s a lot of companies where, yeah, theyā€™re starting to use code AI, and five devs over here use Copilot, five over here use something else… But none of this has the impact we all want it to have until every dev is using it. As we learned with code search, itā€™s so important to make something that every dev will get value from, that will work for every dev, with all their editors, with all their languages. And thatā€™s the work weā€™re doing now.

[00:56:17.09] So I donā€™t know the particular numbers of these other tools out thereā€¦ I think that everyone has to be growing incredibly quickly, just because of the level of interest, but itā€™s still very early and most devs are up for grabs. I think the thing thatā€™s going to work is the code AI that every dev can use and instantly see working. And what are they gonna look at? Theyā€™re gonna say ā€œDid it write good code for me? Is that answer to that code question correct or not? Did it cite its sources? Does it write a good test for me?ā€ And itā€™s not going to be based on hype.

So we just see a lot of - itā€™s kind of like eating-your-vegetables work. Thatā€™s what weā€™re doing. Sometimes itā€™s tempting. I see these other companies come out with these super-hyped-up promises, and ultimately, I think we all try their products and they donā€™t actually work. We do not want to be that kind of company, even though that could probably juice some installs, or something like that. We want to be the most trusted, the most rigorous. And if that means that we donā€™t come up in your Google Search autocomplete - well, I hope we solve that by the time Cody is GA in December… But so be it, because our customers are loving it, our users are loving it, and weā€™re just so laser-focused on this accuracy metric.

And by the way, that accuracy metric - we only can do that because of the context that we bring in. We look at, when weā€™re trying to complete a function, where else is it called across your entire codebase? Thatā€™s what a human would look at to complete it. Thatā€™s what the AI should be looking at. Weā€™re the only one that does that. We look at all kinds of other context sources. And itā€™s taken a lot of discipline, because there is a lot of hype, and thereā€™s a lot of excitement, and itā€™s tempting to do all this other stuffā€¦ But Iā€™m happy that weā€™re staying really disciplined, really focused there.

Yeah, the advantage I think youā€™re alluding to directly is that Sourcegraph already has the understanding of the codebases available to it. That might require some understanding of how Sourcegraph actually works, but to be quick about it: you ingest one or many repositories, and Cody operates across those one or many in an enterprise. You mentioned a couple of different companies; pick one of those and apply it there. Whereas, famously and infamously, GitHub Copilot was trained primarily on code available out there in the world… Which is not your repository; itā€™s sort of everybody elseā€™s. So you inherit, to some degree, the possibility of error as a result of bad code elsewhere, not code here, so to speak.

I think Tabnine offered similar, where they would train an artificial intelligence code tool that was based upon your own codeā€™s understanding, although Iā€™m not super-deep and familiar with exactly how they work. We had their CEO on the podcast, I want to say about two years ago, again. So weā€™re probably due for a catch-up there, to some degree. But I think itā€™s worth talking through the differences, because I think thereā€™s an obvious advantage with Sourcegraph when you have that understanding. And not only do you have understanding; like you said, youā€™ve done your reps. Youā€™ve been eating your vegetables for basically a decade, you know what Iā€™m saying? So youā€™ve kind of earned the efficiencies that youā€™ve built into the codebase and into the platform to get to this understanding for one, and then actually have an LLM that can produce a result thatā€™s accurate is step two. You already had the understanding before, and now youā€™re layering on this advantage. I think itā€™s pretty obvious.

It sounds like a lot of your focus is vertical, in terms of your current customer base, versus horizontal across the playing field. You probably are going after new customers, and maybe attracting new customers, but it sounds like youā€™re trying to focus your reps on the customers you already have, and embed further within. Is that pretty accurate? Whatā€™s your approach to rolling out Cody, and how do you do that?

[01:00:07.02] Hereā€™s my order of operations when I look at our charts every three hours. First, I look at what our accuracy is.

Every three hours?

Oh, yeah. Yeah. I love doing this.

Do you have an alarm or something? Or is this a natural, built-in habit youā€™ve got?

I think a natural, built-in habit. So first, I look at what our accuracy is - our completion acceptance rate - and how itā€™s trending, broken up by language, by editor, and so on. Thatā€™s the first thing I look at. Next, I look at latency. Next, customer adoption, and next DAU, and retention… And that drives all this broad adoption. And everything is growing. Everything is growing in a way that makes me really happy, but the first and most important thing is a really high-quality product. That is what users want. Thatā€™s what leads to this growth in users. But thatā€™s also what helps us make Cody better and better. Thatā€™s what helps us make Cody able to do more of the grunt work, or whatever parts of the job developers donā€™t like. If we were just at every single event, and we had all this content, we could probably grow our user numbers faster than by making the product better… But thatā€™s not a long-term way to win.
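To make that metric concrete, hereā€™s a minimal sketch of how a completion acceptance rate could be computed and broken out by language and editor. The event shape and field names here are my own illustration, not Sourcegraphā€™s actual telemetry schema:

```python
from collections import defaultdict

def acceptance_rates(events):
    """Compute completion acceptance rate (accepted / suggested),
    broken out by (language, editor). The event fields are
    hypothetical, for illustration only."""
    suggested = defaultdict(int)
    accepted = defaultdict(int)
    for e in events:
        key = (e["language"], e["editor"])
        suggested[key] += 1
        if e["accepted"]:
            accepted[key] += 1
    return {k: accepted[k] / suggested[k] for k in suggested}

events = [
    {"language": "go", "editor": "vscode", "accepted": True},
    {"language": "go", "editor": "vscode", "accepted": False},
    {"language": "ts", "editor": "jetbrains", "accepted": True},
]
print(acceptance_rates(events))
# → {('go', 'vscode'): 0.5, ('ts', 'jetbrains'): 1.0}
```

The per-language, per-editor breakdown is what lets you spot a regression in, say, Go completions in one editor while the aggregate number still looks healthy.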

And so instead, weā€™re asking ā€œHow do we use our code graph more?ā€ How do we get better, entire-codebase references? How do we look at syntactical clues? How do we look at the userā€™s behavior? How do we look at - of course, what theyā€™ve been doing in their editor recently, like Copilot does, but how do we take in other signals from what theyā€™re doing in their editor? How do we use our code search? How do we use conceptual search and fuzzy search to bring in where a concept - say, GitLab authentication - exists elsewhere in their code, even if itā€™s in a different language? How do we bring in really good ways of telling Cody what goes into a really good test? If you just asked ChatGPT ā€œHey, write a test for this functionā€, itā€™s gonna write some code, but itā€™s not going to use your languages, your frameworks, your conventions, your test setup and teardown functions. But we have taught Cody how to do that. Thatā€™s all the stuff weā€™re doing under the hood, but we donā€™t need developers to know about that. What they need to see is just that this works; the code that it writes is really good. And by the way, the things I mentioned - those are six or so context sources, and if you compare to other code AI tools, theyā€™re maybe doing one or two. But weā€™re not stopping there, because - take a simple example: if you want the code AI to fix a bug in your code, itā€™s probably gotta go look at your logs. Your logs are probably in Splunk, or Datadog, or some ELK stack somewhere… And so weā€™re starting to teach Cody how to go to these other tools. Your design docs are in Google Docs. Youā€™ve probably got tickets in Confluence that have your bugs; thatā€™s important for a test case. And you also have your product requirements in JIRA. JIRA, Confluence… You want to look at the seven static analysis tools that your company uses to check code, and thatā€™s what should be run… So all these other tools - Cody will integrate with all of them. And they come from so many different vendors, companies that have in-house tools… And that ultimately is the kind of context that any human would need if they were writing code. And again, the AI needs that context, too.

We are universal. Weā€™ve always been universal for code search, no matter whether your code is in hundreds of thousands of repos, or spread across GitHub, GitLab, Bitbucket and so on… And now itā€™s - well, what if the information about your code, the context of your code, is scattered across all these different dev tools? A good AI is going to need to tap all of those, and thatā€™s what weā€™re building. And then you look at tools from other vendors - you know, maybe the future of their code AI will tap their version of logging, their internal wiki… But very few companies use a single vendorā€™s suite for everything and are totally locked in. So that universal code AI is critical. And thatā€™s how weā€™re already ahead today with context, which leads to better accuracy… But itā€™s also how we stay ahead. And developers have come to look at us as this universal, independent company that integrates with all the tools they use and love. So I think thatā€™s gonna be a really long-term, enduring advantage, and weā€™re putting a ton of investment behind this. Weā€™re putting the entire company behind this. It takes a lot of work to integrate with dozens and dozens of tools like this.
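As a rough illustration of that ā€œuniversal contextā€ idea - every name below (the sources, the relevance scores, the budget) is a made-up stand-in, not how Cody actually works - the underlying pattern is: query each pluggable context source, rank the snippets, and pack the best ones into the prompt under a size budget:

```python
def gather_context(query, sources):
    """Ask each context source (code search, logs, wiki, ...) for
    (snippet, relevance) pairs, rank them, and pack the best into
    a prompt under a character budget. Purely illustrative."""
    snippets = []
    for name, fetch in sources.items():
        for text, score in fetch(query):
            snippets.append((score, name, text))
    snippets.sort(reverse=True)  # highest relevance first
    budget, prompt_parts = 1000, []
    for score, name, text in snippets:
        if budget - len(text) < 0:
            continue  # skip snippets that would blow the budget
        budget -= len(text)
        prompt_parts.append(f"[{name}] {text}")
    return "\n".join(prompt_parts) + f"\n\nQuestion: {query}"

# Toy sources standing in for code search, log search, and a wiki.
sources = {
    "code": lambda q: [("func AuthGitLab(...) { ... }", 0.9)],
    "logs": lambda q: [("ERROR auth: token expired", 0.7)],
    "wiki": lambda q: [("GitLab auth uses OAuth2 app tokens", 0.6)],
}
print(gather_context("fix GitLab authentication bug", sources))
```

In a real system the retrieval, ranking, and budgeting would be far more sophisticated; the sketch just shows why having many pluggable sources - rather than one vendorā€™s suite - matters to the quality of what reaches the model.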

[01:04:27.25] For sure. What does it take to sell this? Do you have a sales organization? Who does that sales organization report to - does it report to both you and Beyang collectively, or to you because youā€™re CEO, or is there somebody beneath you they report to, who reports to you? And whenever you go to these metrics every three hours and you see, letā€™s say, a customer that should be growing at a rate of x, but theyā€™re not, do you say ā€œHey, so-and-so, go reach out to them and make something happenā€? Or get a demo to them, because weā€™re really changing the world here and they need to be using this world-changing thing? How does action take place? How does execution take place when it comes to really winning the customer, getting the deal signed? Are there custom contracts? I see a way where I can sign up for free, and then also contact sales. So it sounds like itā€™s not pure PLG - kind of PLG-esque. You can start with a free tier, but… Are most of these deals homegrown? Is there a sales team? Walk me through the actual sales process.

Yeah, everyone at Sourcegraph works with customers in some way or another… And weā€™ve got an awesome sales team; we also have an awesome technical success team that goes and works with the users at our customers. We see a few things come up. When I look at a company, sometimes Iā€™m like ā€œMan, if every one of your developers had Cody tomorrow, they would be able to move so much faster.ā€ And yet I canā€™t just think that and expect it to happen… So one of the reasons we see companies slower to adopt code AI than perhaps they themselves would like is theyā€™re not sure how to evaluate it. Theyā€™re not sure how to test it. Theyā€™ve got security and legal questions, but sometimes they want to see what the improvement to developer productivity is. Sometimes they want to run a much more complex evaluation process for code AI than they would for any other tool out there, just because thereā€™s so much scrutiny, and nobody wants to mess this up. So what we advocate for, what GitHub advocates for, is: thereā€™s so much latent value here. Look at accuracy, look at that completion acceptance rate, and treat that as the quality metric. And then thereā€™s a lot of public research out there showing that if you can show a favorable completion acceptance rate inside of a company, that will lead to productivity gains, rather than having to do a six-month-long study inside of each company. So thatā€™s one thing that helps.

Another thing is sometimes companies say ā€œWe want to pick just one code AI tool.ā€ And I think thatā€™s not the right choice. That would be like a company picking one database in the year 1980, and expecting that to stick forever. This space is changing so quickly, and different code AI tools have different capabilities. So we always push for ā€œGet started with the people that are ready to use it todayā€, rather than trying to make some big top-down decision for the entire organization.

Okay, so two co-founders deeply involved day to dayā€¦ One thing I really appreciate - and I often reference Sourcegraph, and I suppose you indirectly by mentioning Sourcegraphā€¦ Sometimes you by name, you and Beyang by name, but sometimes just ā€œthe co-foundersā€. So I lump you into a moniker of the co-founders. And I will often tell folks like ā€œHey, if youā€™re a CEOā€“ā€ I often talk to a lot of different CEOs, or foundersā€¦ And they really struggle to speak about what they do. They literally cannot explain what they do in a coherent way very well. It happens frequently, and those things do not hit the air, letā€™s just say. Right? Weā€™re a podcast primarily.

[01:08:13.11] Or I have bad conversations about possible partnerships and possible working with them, and itā€™s a red flag for me. If Iā€™m talking to a CEO in particular, that has a challenge describing what they do, Iā€™m just like ā€œDo we really want to work with them?ā€ But you can speak very well. Congratulationsā€¦ You and Beyang are like out there as almost mouthpieces and personas in the world, not just to sell Sourcegraph, but you really do care. I think you both do a great job of being the kind of folks who co-found and lead, that can speak well about what you do, why youā€™re going the direction youā€™re going, and thatā€™s just not always the case. How do you all do that? How do you two stay in sync? Has this been a strategy, or did you just do this naturally? What do you think made you all get to this position to be two co-founders who can speak well about what you do?

We have learned a lot since we started Sourcegraph on this in particular. Even when describing Sourcegraph, we say ā€œCode search. And now we also do code AI.ā€ And I think some people are definitely relieved when they ask ā€œHey, what does Sourcegraph do?ā€ that the answer is four words. Because thereā€™s a lot of companies that do struggle to describe what they do in four words. And yet, we were not always at this point. Iā€™m coming here from a position where we have a lot of customers - weā€™ve validated that we have product-market fit, that a ton of people use those products - and so I can say that. But before we had that, there was a lot of pressure on me from other people, and on me internally, to make us sound like more than code search. Because code search feels like a small thing… Which seems silly in hindsight. Does Google think that search is a small thing? No. But there was a lot of pressure to say ā€œWeā€™re a code platform, a developer experience platformā€, or that we revolutionize and leverage, and all this stuff. Thereā€™s a lot of pressure –

…but nothing beats the confidence of product-market fit - of having a lot of customers and users - to just say what you actually do. And one way we started to get that, even before we had all that external validation, was that Beyang and I use our product all the time. We code all the time. I donā€™t code production features as much, but we fundamentally know that code search is a thing that is valuable. That Cody, that code AI, is the thing thatā€™s valuable. And we felt that two weeks after we started the company. We were building Sourcegraph and we were using Sourcegraph, and for me, it saved me so much time, because it helped me find that someone had already written a bunch of the code I was about to spend the next three weeks writing. So it saved me time in the first two weeks. And from then on, it clicked. So I think as a founder, use your product - and if youā€™re not using your product, make it so good that you would use it all the time. And then iterate until you find the thing that starts to work, and then be really confident there. But itā€™s tough until youā€™ve gotten those things.

Thatā€™s cool, man. It does take a journey to get to the right place. I will agree with that. And just know that out there you have an Adam Stacoviak telling folks the way to do it is Sourcegraph.

You guys are great co-founders; you seem to work great together… I see you on Twitter having great conversations… Youā€™re not fighting with people, youā€™re not saying that youā€™re the best - youā€™re just out there, iterating on yourselves and the product, and showing up. And I think thatā€™s a great example of how to do it in this world where all too often weā€™re just marketed to and sold to. And I donā€™t think you all approach it from a ā€œWe must sell more, we must market moreā€ angle. Thatā€™s kind of why I asked you the sales question - like, how do you grow? And you didnā€™t fully answer, and thatā€™s cool… You gave me directional answers, not particulars. But thatā€™s cool.

[01:12:04.16] Yeah. Well, look… If you just take the customers we have today, we could probably become the highest-adoption, highest-value code AI tool just by getting to all the devs at our existing customers - not even adding another customer. And that just seems to me to be a much better way to grow: through a truly great product that everyone can use, that everyone can adopt, thatā€™s so low-friction… Rather than something thatā€™s not scalable, like getting billboards, or buying ads… Thatā€™s all part of the portfolio approach youā€™ve got to take, but ultimately, the only thing thatā€™s gonna get really big is a product that people not only love so much they spread it, but that gets better when they use it with other people. Thatā€™s the only thing that matters. Anything else, and youā€™re gonna hit a local maximum.

Very cool. Okay, so weā€™re getting to the end of the showā€¦ I guess, whatā€™s next? Whatā€™s next for Cody? Give us a glimpse into whatā€™s next for Cody. What are you guys working on?

For us itā€™s really two things. Itā€™s keep increasing that accuracy - just keep eating our vegetables there. Maybe thatā€™s not the stuff that gets hype, but itā€™s the stuff that users love. And then longer term, over the next year, itā€™s about how do we teach Cody about your logs, your design docs, your tickets, your performance characteristics, where itā€™s deployed - all these other kinds of context that any human developer would need to know. And ultimately, thatā€™s what any code AI would need to know if itā€™s going to fix a bug, if itā€™s going to design a new feature, if itā€™s going to write code in a way that fits your architecture. And you donā€™t see other code AI tools even thinking about that right now. But thatā€™s somewhere I think we have a big advantage, because weā€™re universal. All those pieces of information live in tools from so many different vendors, and we can integrate with all of them… Whereas any other code AI is going to integrate with its own locked-in suite… And youā€™re probably not using whatever that vendorā€™s tools are for a wiki, for example, and their logs, and all that. So thatā€™s a huge advantage. And thatā€™s how we see code AI getting smarter and smarter - because itā€™s going to hit a wall unless it can tap that information. And you already see other code AI tools hitting a wall, not getting much better over the last year or two, because they cannot tap that context. Itā€™s all about context, context, context - whether youā€™re feeding it into the model at inference time, or whether youā€™re fine-tuning on it… Itā€™s all about the context. So thatā€™s what weā€™re going to be completely focused on, and we know the context is valuable if it increases that accuracy. And what a beautiful situation - this incredibly complex, wide-open space, and you can actually boil it down to basically a single metric.

So thatā€™s our roadmap - just keep on making it better, and smarter, in ways that mean developers are going to say ā€œWow, it wrote the right code - and I didnā€™t think it could write an entire file. I didnā€™t think it could write many files. I didnā€™t think it could take that high-level task and complete it.ā€ Thatā€™s what weā€™re working toward.

Well said. Very different conversation this time around than last time around, and I appreciate that. I appreciate the commitment to iteration, the commitment to building upon the platform you believed in early on to get to this place, and - yeah, thank you so much for coming on, Quinn. Itā€™s been awesome.

Yeah, thank you.


Our transcripts are open source on GitHub. Improvements are welcome. šŸ’š

