
What you need to know about Spotify’s search function – Jonny Brooks-Bartlett on The Product Experience

source link: https://www.mindtheproduct.com/what-you-need-to-know-about-spotifys-search-function-jonny-brooks-bartlett-on-the-product-experience/

Featured Links: Follow Jonny on LinkedIn and Twitter | Jonny on Medium | Seth Stephens-Davidowitz’s book ‘Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are’ | Reinforcement Learning at GeeksforGeeks

Episode transcript

Randy Silver: 0:00Hey, Lily, did you know that I have climbed the highest mountains? I have run through the fields, only to be with you. What? I have run, I have crawled, I have scaled these city walls, only to be with you.

Lily Smith: 0:16Okay, you’re beginning to sound a little creepy right now.

Randy Silver: 0:19Yeah, probably. But I still haven’t found what I’m looking for.

Lily Smith: 0:25Oh, my days. Really? Have we really got to the point where you quote U2 lyrics and I have to figure out what relevance this has to this week’s episode?

Randy Silver: 0:36You have a better idea?

Lily Smith: 0:37Yes, I do. No more searching. I’m just going to tell everyone that we’re talking to Spotify senior machine learning engineer Jonny Brooks-Bartlett about the things every product person needs to know about search. There. See how easy that was?

Randy Silver: 0:53Yeah, surely that’s way more efficient. Next time, you’re in charge of the intro. Okay, enough silliness. Let’s get right into it.

Lily Smith: 1:06The Product Experience is brought to you by Mind the Product. Every week on the podcast, we talk to the best product people from around the globe. Visit mindtheproduct.com to catch up on past episodes and discover more,

Randy Silver: 1:17browse for free, or become a Mind the Product member to unlock premium content, discounts to our conferences around the world, and training opportunities. Mind the Product also offers free ProductTank meetups in more than 200 cities.

Lily Smith: 1:35Hi, Jonny, great to have you on the podcast today. How are you doing?

Jonny Brooks-Bartlett: 1:39I’m doing very well. Thank you. How are you doing?

Lily Smith: 1:41Yeah, very good, thank you. All the better for being here chatting to you today.

Jonny Brooks-Bartlett: 1:47You know how to make people feel good.

Lily Smith: 1:52So we’re gonna be chatting all about search today. But before we get stuck in, would you please give us a real quick intro into who you are, what you do in product and how you got here?

Jonny Brooks-Bartlett: 2:05Yeah, sure. Thanks. So yeah, my name is Jonny Brooks-Bartlett. I’m a senior machine learning engineer at Spotify, and in particular, I work in the search team. So the product that I work on is Spotify search, and at a very high level, my work is to build algorithms to give you the right results, or the most relevant results, when you make a query. My journey here is a bit like a roller coaster, or it hasn’t been linear, let’s put it that way. I won’t dive too much into it, but at university I did a maths degree, then went on and did a PhD in biochemistry. During that time, I was writing a lot of software to analyse experiments, and I started to realise that the thing I liked most was the coding and the algorithms, as opposed to the actual biology itself. That’s when I found out about this field called data science, which was everywhere, it was massive, it was hyped up quite a lot. This was around 2015, and it’s all about big data and all these smart machine learning algorithms. So I decided to do a six-week boot camp, and then managed to get a job at a media publishing company called News UK. I worked with editors and journalists from The Times and The Sun newspapers, trying to improve the content creation process. So can I support editors in working out what the best things to put on social media at the right time are? Can I get the right headlines, and things like that? Then I moved to a company called Deliveroo, who do food delivery, and at that company I did lots of different things in data science: a bit of fraud detection, a bit of recommendations and ranking, classification of menus. Lots of different things, and it was great, I really loved it. And then in the summer of 2020, I decided to move out of data science and into engineering, as a machine learning engineer at Spotify. This was more of a move to say I actually want to work not just on the algorithms, but also on the systems that serve these algorithms to users. So that was a real switch for me, and it’s been a real learning curve along the way, but I’ve really enjoyed it, and that’s how I’ve ended up here at Spotify, in the search team.

Lily Smith: 5:01Yeah, amazing. And search seems like the kind of thing where it’s a bit of an art and a science. But why do we still even need people to design search functions? Why hasn’t this problem been solved? Search has been around since day zero of the internet. Why do we need humans to figure out how search should work?

Jonny Brooks-Bartlett: 5:31Yeah, it’s a good question. It’s one which, on the surface, can seem like quite an easy problem. If a user types into a search bar exactly what they want, is it that hard to give it to them? It turns out that in some cases it’s not that hard, and in some cases it is very hard. The easy case is when a user comes to a platform, knows exactly what they want, knows, let’s say, the title or a description, and they can type exactly the same thing as the title or the description or any other metadata that we keep. If you type exactly that, then it’s easiest for the search system to know what to retrieve and what to get you. It becomes very difficult when the user doesn’t know exactly what they want. So at Spotify, if someone knows exactly that they want to listen to a particular song by a particular artist, they might say, hey, I want to listen to ‘Work’ by Rihanna. See, now I’m showing my age, because I know that’s not in the charts anymore. It just came to my head first. But yeah, if they want to listen to that,

Lily Smith: 6:50Oh, I’m just glad I knew who Rihanna was.

Jonny Brooks-Bartlett: 6:54Oh, I’ve got to the point where we get presentations done at work, and they’re like, oh yeah, this is the biggest, most streamed artist on the platform. And I’m like, who? That person is Bad Bunny, two years in a row now, I think, and I had no idea who he was. And yeah, I got schooled about BTS, a huge Korean K-pop band. No idea at the time. Oh my God, how do you not know?

Randy Silver: 7:31We are going to have to get a link to your playlist to share as part of this. It’s gonna be old school, it’ll be great.

Jonny Brooks-Bartlett: 7:41I’d love to go old school. But yeah, sorry, what was I saying? So if someone knows exactly what they want, they can come in and type exactly that, and they’ll get it. However, some people might just say, I just want chill, mellow music. And then you’re like, okay, what do chill and mellow mean to you? Are you looking for a playlist of stuff, or are you looking for a particular track, or an album? And now we’ve actually branched out into more content types. We have podcasts on the platform on Spotify, and we also recently released audiobooks, at the moment just in the US, but they’ll start to be available everywhere else. And people actually search differently for those. You don’t necessarily search for the name of an author all the time. If you want a particular book, what is it that you search for? If you want a podcast, do you typically type or query the rough topic that it’s about? And that’s just Spotify. If you get into retail, it could be that someone wants a white dress. Now, all of a sudden, you have a text query, but you need to pull in everything about images and such. So you now need to combine text with images, perhaps video, depending on your platform. So it’s actually very complex to do search.

Randy Silver: 9:06So let’s step back a little bit and talk about how the processing of a search query actually works. I think I’ve seen you talk about this, and you said there are three pieces to how search actually works. Can you go through those, please?

Jonny Brooks-Bartlett: 9:22Yeah, sure thing. So when someone does a search, we can break down what happens into three main steps. And I should say, not every company does all three of these steps, and not every company does them the same, so the quality of a search engine will differ from place to place and company to company because of how they perform the steps. But this is at least the broad way in which most places do search. The first step is what we call query processing, the second step is candidate retrieval, and the third step is ranking. I’ll go through each one of those in turn, and hopefully it makes some sense. So first off, you do query processing. Someone types a query, and we have to get some sort of understanding of it or do some processing on it. It might be, for example, that the query is incorrectly spelled, in which case we might want to do some spell correction. Typical processing that happens almost all the time is what we call normalisation of the text. This could be things like making sure all of the letters are lowercase, so that the search system doesn’t mismatch an uppercase word with a lowercase word in the data store. In search, when we have a store of all of the catalogue, we call it an index. I’m going to try to say data store, because it might mean more to people, but if I slip up and say index, that’s what I mean. So that’s what we do: we process the query so it’s easier for the downstream system, the system that’s actually going to retrieve the candidates, to be able to match things to the query. If I correct the spelling, if I lowercase everything, if I normalise things, then it’s much easier to match them. So that’s the query processing step. Then you have what we call candidate retrieval. Some companies won’t need this, because the data that they have is so small that they don’t need to worry about retrieving certain candidates. Let me explain what this is. At Spotify in particular, we have millions and millions of items in our catalogue: different podcasts, episodes, artists, shows, all these things. Now, it isn’t feasible to return all of these millions of things every time someone does a search and then go on to rank them. What we have to do is pull out the most relevant group, so we typically pull out a few hundred. We’ve got to somehow work out, from the query that’s been processed, how we can get the, say, two to four hundred items that are most relevant to the query, and we’ve got to do this in a short amount of time. Then, once we’ve got that few hundred from the candidate retrieval step, we need to order them, to rank them in such a way that you’re presented with the most relevant things at the top. And that’s the ranking step. Typically the ranking step takes in the most information about the context, so it’ll take in a lot of user information, that’s the personalization side, and all of the metadata, whereas the candidate retrieval step will be much more lightweight; it might not have as much information as the ranking step. But all this is done very, very quickly. If you use Spotify search, for example, you’ll notice that after you type every character, you get results. So on each character, all three of these processes happen.
And we have very strict latency, or time, constraints to get those results to you. And that’s largely how things might differ: if you do a Google search, you have to press search after typing your entire query. I suspect that’s largely because there are billions of things on the internet and you could mean so many different things, so trying to return results for each character probably isn’t going to be as useful.
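
To make those three steps concrete, here is a minimal, self-contained sketch of the pipeline shape Jonny describes: normalise the query, cheaply retrieve a shortlist, then rank it with more context. Everything in it, the toy catalogue, the function names, and the user-history ranking signal, is hypothetical and illustrative, not Spotify’s implementation:

```python
# Toy three-stage search pipeline: query processing -> candidate
# retrieval -> ranking. All data and names are illustrative only.

CATALOGUE = {  # stand-in for an index over millions of items
    "work": ["Work - Rihanna", "Work from Home - Fifth Harmony"],
    "chill": ["Chill Hits (playlist)", "Lofi Chill Beats (playlist)"],
}

def process_query(raw_query: str) -> str:
    """Step 1: normalise so retrieval can match the data store.
    Real systems also spell-correct here."""
    return raw_query.strip().lower()

def retrieve_candidates(query: str, limit: int = 300) -> list[str]:
    """Step 2: cheap, fast retrieval of a few hundred plausible items."""
    hits = [item
            for key, items in CATALOGUE.items() if key in query
            for item in items]
    return hits[:limit]

def rank(candidates: list[str], user_history: set[str]) -> list[str]:
    """Step 3: a heavier, contextual reorder of the shortlist.
    Here the only 'personalization' is boosting past listens."""
    return sorted(candidates,
                  key=lambda item: item in user_history,
                  reverse=True)

results = rank(retrieve_candidates(process_query("  Chill ")),
               user_history={"Lofi Chill Beats (playlist)"})
print(results)
```

In a production system each stage would typically be a separate service with its own latency budget; the point here is only the division of labour between the cheap retrieval pass and the expensive ranking pass.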

Lily Smith: 13:40And how do you measure how successful you’re being with search? And then also, how do you decide what bit you need to optimise?

Jonny Brooks-Bartlett: 13:50Yeah, so this is largely where we start thinking about what it is that the business is trying to achieve, and how we can help those business goals with search. And this is when we think about what we’re trying to optimise. So, for example, we have podcasts on the platform, and that started around 2018. Around that time, we would have almost certainly said: we’ve got podcasts on the platform, we need to improve how people search for podcasts, because the ways in which people search for podcasts are quite different to how they search for music. In which case, we need to invest some time to learn about this, and this is specifically what my team has done. Early last year we first started really trying to delve into improving podcast search even more, trying to understand the semantics of the queries.

Lily Smith: 14:47So you didn’t just recommend The Product Experience every single time someone tried to search for something?

Jonny Brooks-Bartlett: 14:59That’s what I’ll do next time

Randy Silver: 15:02It’s obviously what they want to find.

Jonny Brooks-Bartlett: 15:04I’m gonna take it to the meeting tomorrow.

Lily Smith: 15:06Yeah, you’ll get a pay rise and a promotion. Anyway, sorry, sorry, I interrupted.

Jonny Brooks-Bartlett: 15:15So this is how we’ll decide what we’re going to work on. We might also look at localization, for example. Spotify is in loads and loads of countries, and we work a lot in English and for English-speaking markets, but in non-English-speaking markets, maybe the search experience isn’t so good. Maybe our spell correction algorithms are actually optimised so much for English that they don’t work so well in non-English languages. So we’ll decide what sorts of problems to go after, often by looking at our data and trying to see trends: do we have less success in this particular country or region, or with this type of content? Are we really bad at recommending it? That’s how we might come up with what we’re going to work on. Then we set objectives, or metrics that we’re going to look at, where we say: if I’ve improved search in this country, then I should get more successful searches in that country. Or if I’ve improved podcast search, then maybe I’ll see an improvement in my podcast success metrics. And so yeah, that’s largely how we come up with what we’re going to work on.

Randy Silver: 16:33What’s a good way for product managers to work with your team? We usually come in and say we’ve got an objective for what we want to do in a sprint, and we’ll sit down and work with our engineers on breaking things down into stories, potentially estimating them, and things like that. But when you work with ML, it’s not as straightforward as all that; it’s not always so easy to predict how hard something’s going to be or how long it’s going to take. So what does it actually look like? What’s a good experience of working with the product team?

Jonny Brooks-Bartlett: 17:05Yeah, so this is actually one of the things that I’ve found works quite differently across companies: how do we actually work together on a product to achieve, let’s say, a business objective? Often what happens at Spotify, and I’ve found it works really well, is that the product manager will try to define the problem, or a problem space. So I mentioned that we might not do so well in other countries, in other markets, in search. The product manager might look at the business goal and know that, say, there’s a problem in a particular market. Does solving the problem for that market work towards a business goal? If yes, great. If not, then perhaps we need to look for a different problem. So they’ll try to define the problems, but the product managers won’t prescribe a solution. And I think that works really well, because then we as engineers can take a problem, and we take about two weeks, a sprint usually, to take the goals that we want in the business and in search, take the problems, and come up with what we call requests for comments, or RFCs. These are just documents where we write down what we think a solution might be, share it with product managers and engineers, they comment on it, and we try to refine the idea of what it’s going to look like. And then that acts as our plan for a piece of work, let’s say in the next quarter or the next sprint, to hopefully tackle the problem that we have and therefore work towards the objectives that we have in our department or in our company.

Lily Smith: 19:09That makes loads of sense. But what about QA? How do you then QA the changes that you’re making, or test what your changes are doing?

Jonny Brooks-Bartlett: 19:21Yeah, so there’s QA that we do within the team to make sure things work, and then there’s what I’ll call online versus offline testing. Offline testing is when we test things that aren’t in front of users at that point. We’re testing them amongst employees, offline, so we’re not too worried about what goes on. We’ll test on ourselves, and sometimes we have an employee rollout. There might be a new idea that we have that we want to get feedback on, because we’re not sure how users will experience it, so we’ll roll it out to employees, let the company know, get some feedback first on what works well and what doesn’t, and make some of those changes. And then we might start doing the online testing, or what people mostly call A/B testing. At this point we run these A/B tests where we take 50% of users who will have the current experience, the status quo, and 50% of users who will have this new feature we’ve worked on, and we run a test across those two different groups. Most of the time, users won’t even know that there’s a test running, because in many cases the changes will be so slight. And then we’ll see: does search success change across those two groups? Do the people who get the new feature get to see a result that they might stream more quickly? Do they end up with more successful searches, more things that they actually stream or add to a playlist, things like that? Those are the sorts of measures that we look at to say whether this feature is actually better or not.
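
For a rough sense of how such a test might be read, here is a generic two-proportion z-test over made-up search success counts for the two groups. This is standard A/B test arithmetic, not Spotify’s actual evaluation tooling or metrics:

```python
import math

# Hypothetical A/B results: 50/50 split, invented numbers.
control_n, control_success = 100_000, 61_200      # status quo
treatment_n, treatment_success = 100_000, 61_900  # new feature

p1 = control_success / control_n
p2 = treatment_success / treatment_n

# Pooled standard error for the difference in success rates.
p_pool = (control_success + treatment_success) / (control_n + treatment_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / treatment_n))
z = (p2 - p1) / se

print(f"control {p1:.1%}, treatment {p2:.1%}, z = {z:.2f}")
# |z| > 1.96 is significant at the usual 5% level, so the z of ~3.2
# here would suggest the new feature genuinely improved success.
```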

Randy Silver: 21:23That’s great from a quantitative standpoint, but before you do that, do you look at customer research? Do you look at qualitative measures as well to try and understand how to implement this stuff?

Jonny Brooks-Bartlett: 21:34Yeah, so there’ll be some features which we will just roll out and A/B test, largely because we won’t necessarily know if they’re actually going to be an improvement, but we don’t think the user will experience things too differently. This might be something like: we’ve got a new ranker and we’re reordering the results. Most of the time, it’s difficult even for the person who’s actually written the ranking algorithm to tell the difference. So those sorts of things we probably won’t put in front of a user; we won’t do any user research. But we do do user research, particularly when either we’ve got a big UI change, a change in how things will look, or we’re going into the complete unknown. So if we want to, say, launch in a new market, and we want to know what the problems or the friction points are, that’s when we’ll start doing some of this research. The researchers will usually come back with a presentation, and actually, every time they do the user research, we’re allowed to sit in on the interviews as well, which is very insightful. I love sitting in on these research panels, because I learn things that really challenge what I thought about how people interact with or know about a product. It’s absolutely fantastic. So we can sit in on the interviews, the user researchers will typically summarise the key points and try to make clear what problems there are, and then we can figure out how to solve them.

Lily Smith: 23:28And that’s just made me think about the biases that you might introduce as you’re writing the algorithms that return results for the customers of Spotify. How do you avoid putting your own kind of bias into those algorithms? Or, I guess, even with personalization, how do you ensure that it doesn’t just mean that a person sees the same content over and over again? Well, maybe that’s a good thing. I don’t know. Maybe that’s what they want?

Jonny Brooks-Bartlett: 24:03Well, yeah. So bias is a really big problem, actually. We could have a whole podcast episode on bias in algorithms. The truth is, I have not worked anywhere and seen an algorithm that hasn’t been biased to some extent, where I felt, actually, this is an acceptable amount of bias, it’s not too bad. To be honest, a lot of it is inherent in the algorithms, and it just so happens that the type of bias may or may not be malicious in some sense. Often when people talk about biased algorithms, it’s typically about malicious ones, the ones that people see the most, like facial recognition and things like that. Most of the bias we see in recommendation systems or search systems is towards things that are very popular and just get clicked on. So you’re more likely to see an artist come up in a search like Taylor Swift, or BTS, or Bad Bunny (then again, clearly I don’t see enough of them) than you are an artist who has, say, risen locally and has a few thousand streams, maybe. And that’s just because an algorithm that is designed to increase the amount that people stream, or have a successful search, is designed to surface the things that are most likely to be clicked, and the things that are most likely to be clicked are things that are popular. In search it’s not as bad as in other areas, because with the query the user can state their intent. But I’ve also worked on algorithms that weren’t search related, where I was still recommending certain things, like at Deliveroo, and it can get to the point where you just get too much popularity. An example, and the reason I felt quite strongly that we needed to tackle bias: I’m vegan, and the most popular stuff on any food delivery website is not vegan. So when I saw ranking algorithms that would constantly give me non-vegan restaurants, I’m like, we’ve got way too much bias in here, because it clearly hasn’t learned about me personally. And I was also helping to work on the algorithm, you know, and when I can’t solve it for myself, it can be very difficult. So there’s a lot of things to tweak and change, and yeah, bias will exist almost everywhere, all the time.
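
One generic way ranking teams damp this kind of popularity bias, sketched below with made-up numbers, is to feed popularity through a log (or similar sublinear) transform with a small weight, so it remains a signal without letting megahits swamp more relevant niche items. This illustrates the idea only; it is not the approach Spotify or Deliveroo actually use:

```python
import math

# Popularity damping in a toy ranking score. All numbers invented.
items = [
    # (name, relevance_to_query, total_streams)
    ("Global megahit",   0.70, 900_000_000),
    ("Local rising act", 0.85, 4_000),
]

def score(relevance: float, streams: int, pop_weight: float) -> float:
    # log10 squashes the 225,000x stream gap down to a gap of ~5.4.
    return relevance + pop_weight * math.log10(streams)

for w in (0.10, 0.02):  # heavy vs. damped popularity weight
    winner = max(items, key=lambda x: score(x[1], x[2], w))
    print(f"pop_weight={w}: winner = {winner[0]}")
# 0.10 -> the megahit wins on sheer popularity;
# 0.02 -> the more relevant local act surfaces.
```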

Randy Silver: 26:53I knew a guy years and years ago who owned some radio stations, and I was out to dinner with him, and he told me: I can prove that you only like 400 songs, because, you know, some of the most popular radio stations have a really restricted playlist. And as you know, I used to be a music journalist. I guarantee you that I like more than 400 songs, and I like to learn new things. But we all think we’re special and unique and wonderful. How predictable and how unique are we really, when you look at this? Are we less unique than we think? Or does everyone really need something tweaked just for them?

Jonny Brooks-Bartlett: 27:31Yeah, no, we are not unique. We love to think we are; we really aren’t. It’s so funny. I’ve actually read a lot about this, and spent a lot of time on it. As I said, I worked previously at News UK, the media publishing company, on The Times and The Sun, and one of the things we were trying to do was look at what people wanted to read, when we were recommending what the homepage should look like. On any given day we’ve got new news articles, and I remember talking to one of the editors, who said to me: we know what everyone wants. It’s nudity and celebrity deaths. That’s what comes up, and it doesn’t matter how many surveys you do. They said it doesn’t matter how many surveys they run, with people always saying they want feel-good stories; the data tells us that you want naked people on your feed. So even when people think they want certain things, it doesn’t really match what their behaviours are. And there’s a really good book I read about this called Everybody Lies, about what the internet can tell us about who we really are. I remember the author’s first name is Seth, but I can’t remember the surname. It’s a fantastic book. He’s a social scientist by background, worked at Google, and now writes for the New York Times. He largely looks at big data, at searches, to try and understand who people are. So it goes from, say, Google searches all the way to Pornhub searches, and tries to understand the trends in what people search for. I won’t spoil it or say anything obscene, but yeah, people are quite predictable, and they can search for some really odd things.

Lily Smith: 29:36I suppose that’s the thing as well with search, or with engagement with people on the internet in general: you’ve got the behaviour that they’re actually doing, and then there’s also the behaviour that you want them to do. Obviously, at Spotify you probably want them to listen to more music, so I suppose that for you those two things are aligned. But there’s influence that you can have, where you might want to get people to explore new artists, because if they have new content to consume, then they’re more likely to come back, maybe? Or are there any other sorts of things that you’re optimising for when you think of search?

Jonny Brooks-Bartlett: 30:23Yeah, so it can vary quite a bit in terms of what we optimise; if you ask me every quarter, there might be something new that we’re trying to optimise for. If I were to say there’s an overarching thing we want, we would say: if someone does a search, we want them to find something that they will engage with. That’s the idea. If they do a search and they don’t engage with something, that means you’re probably not showing the right things. But what we want them to engage with can change. As I said, when we got podcasts on the platform, it was a bit of a push. We wanted to make sure of the value that we gave to not just the users but the creators of podcasts; they wouldn’t want to put stuff on a platform if we weren’t actually getting people into podcasts. So it was something that we wanted to increase, and we can do that with what we show. I’ve seen that in several places. At Deliveroo, for example, we wanted to make sure that we showed restaurants that were more independent, rather than big chains, to become fairer with what we showed, and we could actually increase orders just by showing those at the top. If you open the app, you get the list of restaurants on your phone; if you put those restaurants higher, people will purchase from them. Same thing: people will basically engage with the stuff that you show them. And it’s weird, because I think to myself, no, I know what I want. You can show me what you want, but I’m going to engage with what I want. But actually, these companies, they’re sneaky. And I say ‘these companies’ as if I’m not part of the problem, as if I’m not someone trying to do that. But yeah, the metrics tell us that we can show you stuff and change your behaviour slightly, depending on what we show you.

Lily Smith: 32:26It really reminds me of the film High Fidelity, or the book, where the guy’s like, I’m going to sell five copies of this album now, and he just puts it on in the store, and then everyone’s like, hey man, what’s this music? I want to buy a copy. It’s exactly that, isn’t it? It’s 100% that. I used to work in a search company as well, and one of the things that we found quite interesting was that we would have a certain amount of results that were true to the search that had been made, and then some that were slightly more personalised to the customer, based on their own viewing or listening. But then we had a bit of a randomization angle as well. Do you do anything like that? Because I find that almost helps with, not helps to remove bias, but it helps to add in a little bit of spice, a little bit of something different that you probably weren’t expecting, and also, potentially, to keep the algorithm fresh or something.

Jonny Brooks-Bartlett: 33:42Yeah, so this is a good question. This goes into a part of machine learning that, I guess, Spotify are investing quite heavily in. Specifically in search, I haven’t done any of the randomization stuff myself, though I’ve seen papers where people at Spotify have, where we’ve had a proportion of the searches, or users, where we randomise the feed. But actually, what people are doing in machine learning to try to keep the algorithm fresh is this field called reinforcement learning. This is where you create a machine learning model that says, I want to optimise getting the most, say, clicks on certain things, as an example. But what I’ll do is assume that I can put in stuff that’s maybe a bit random. Some of it is going to be ordered in a particular way, but I might just quickly slip in this particular piece of content, and we’ll see if you engage with it. If you do, great, now I have a bit of a better idea about it. And if you don’t, then it says, okay, let’s try another. So we call it explore-exploit. We want enough randomization to explore all of the potential candidates, all the content that we have, to see whether it’s any good, but we also know enough that we want to exploit what we do know, so we can order things in the way that we think is best for you. And there are algorithms that automatically decide how much to explore, how much to exploit, and how to learn. In fact, a lot of the stuff that comes out of DeepMind, so AlphaGo and all that, AlphaZero, and AlphaFold (because I’m a protein guy, I love the folding), it’s all reinforcement learning. It’s very much explore-exploit, trying to learn what the best moves are.
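
For a flavour of explore-exploit in code, here is a textbook epsilon-greedy bandit: mostly show the item with the best observed click rate, but occasionally slip in a random one to keep learning. The click probabilities are invented, and this is a teaching sketch, not Spotify’s production system:

```python
import random

# Epsilon-greedy bandit over three items. The 'true' click
# probabilities are made up and unknown to the algorithm.
TRUE_CLICK_PROB = {"item_a": 0.10, "item_b": 0.05, "item_c": 0.20}
EPSILON = 0.1  # fraction of impressions spent exploring at random

shows = {item: 0 for item in TRUE_CLICK_PROB}
clicks = {item: 0 for item in TRUE_CLICK_PROB}

def choose() -> str:
    shown = [i for i in shows if shows[i] > 0]
    if not shown or random.random() < EPSILON:
        return random.choice(list(TRUE_CLICK_PROB))        # explore
    return max(shown, key=lambda i: clicks[i] / shows[i])  # exploit

for _ in range(10_000):
    item = choose()
    shows[item] += 1
    # Simulate the user: click with the item's true probability.
    clicks[item] += random.random() < TRUE_CLICK_PROB[item]

print({i: round(clicks[i] / max(shows[i], 1), 3) for i in shows})
print("most shown:", max(shows, key=lambda i: shows[i]))  # usually item_c
```

Real systems tend to replace the fixed epsilon with methods like Thompson sampling or upper confidence bounds, which, as Jonny says, decide automatically how much to explore versus exploit.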

Lily Smith: 35:48Amazing. We have run out of time, sadly, but I could geek out on this for a lot longer. Thank you so much for joining us, Jonny.

Jonny Brooks-Bartlett: 36:00Oh, thank you very much. It’s been a pleasure.

Lily Smith: 36:12The Product Experience is the first and the best podcast from Mind the Product. Our hosts are me,

Randy Silver: 36:20Lily Smith, and me, Randy Silver.

Lily Smith: 36:23Louron Pratt is our producer and Luke Smith is our editor.

Randy Silver: 36:27Our theme music is from Hamburg-based band Pau. That’s P-A-U. Thanks to Arne Kittler, who curates both ProductTank and MTP Engage in Hamburg and who also plays bass in the band, for letting us use their music. You can connect with your local product community via ProductTank, regular free meetups in over 200 cities worldwide.

Lily Smith: 36:48If there’s not one near you, maybe you should think about starting one. To find out more, go to mindtheproduct.com/producttank to find one near you.

