source link: https://diginomica.com/samsaras-ai-lead-talks-risks-use-cases-and-explains-why-llms-arent-silver-bullet

Samsara’s AI lead talks risks, use cases, and explains why LLMs aren’t a silver bullet

By Derek du Preez

July 5, 2023




(© GreenOak - Shutterstock)

Although the sudden wave of hysteria around AI could have you believing that all aspects of the technology are new, the reality is that some companies have been using AI models and machine learning for some time to drive their offerings. Samsara, a vendor that aims to digitize physical operations, has been using AI as part of its product portfolio for a number of years now, for example, to improve driver safety and provide optimized routes for fleet vehicles. 

There is of course more that the vendor plans to introduce, given the wide-ranging scope of challenges that the connected operations sector faces (and the huge array of data that is available for collection), but Samsara’s approach to AI is seemingly measured and driven by conversations with customers. Whilst some companies are rushing to introduce large language model (LLM) tools across their portfolio, Samsara is instead asking: what is the need? 

diginomica sits down with Evan Welbourne, Samsara’s AI lead, to discuss how the company is thinking through its approach to AI, specifically in terms of how it is able to cut through the noise, make use of what’s valuable, and deliver beneficial outcomes for customers. 

Welbourne’s team is currently made up of 20 people working on AI specifically, but a broader group of 40 to 50 people working on infrastructure also plays into the AI development work. In addition, Samsara has a privacy and ethics board comprised of leadership from across the company, as well as a specific AI ethics subcommittee that feeds into the work on assessing the risks associated with AI. 

As noted previously on diginomica, Samsara is targeting an industry - and companies - that have historically been neglected by vendors seeking to capitalize on digitizing work. Equally, we’ve noted how the quantity of data that is available for collection across these large-scale physical operations provides an opportunity for companies to think differently about how they operate. AI will play a critical role in accelerating the advancements in this area, according to Welbourne. He says: 

I think AI is really the crucial enabler, especially in the connected operations space, where the data is really petabyte scale. You're either working with an incredibly massive data set, or an aggregation that is very complex, or you're trying to find a needle in the haystack. 

AI emerges as this really excellent tool for our customers to extract the value from that data, which is so complex. It's multimodal, as we say. It's not just video, it’s sensor data, it’s text data. So AI is certainly a key tool in that regard. It's sort of the microscope for finding specific events, like the safety events that we see from drivers. And then it's also the macroscope for extracting the trends of driving habits, or whatever else is going on.

Where do you start? 

As the companies Samsara is working with have often not worked with data in this way before, or are grappling with legacy technology environments that need upgrading, Samsara needs to be cautious in how it approaches the use of AI for its customers. Too much too soon is likely not going to be what these buyers need. Rather, a curated approach to targeting use cases that deliver a high impact is what’s required. Welbourne says: 

In terms of prioritization, what we try to do is work backwards from specific customer use cases. And there what you'll see is us identifying specific customer needs, iterating closely with our customers. We then try to find solutions to their problems, and sometimes AI is not the solution. But we have found that there are some good use cases for AI and that’s when we're looking at which of the AI solutions are going to be the best fit and which can apply in various customer use cases to get maximum results. 

Prioritization flows from that customer need in pretty much every case. And then it works into the types of AI infrastructure that we build and the tools that we use to improve our products and deliver them faster. 

Samsara is typically adapting large foundational models, or what it calls backbone models, which are more often than not open source. Many standard models are available, but the company also builds its own - either new foundational models, or extensions to the open source models that exist. 
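As a rough illustration of what extending an open source backbone model can look like in practice - a generic sketch of the technique, not Samsara's actual code - a pretrained vision model can be reused wholesale, with only a small task-specific head trained on in-domain data:

```python
# Generic sketch: adapting an open source "backbone" vision model by
# replacing its classification head. Illustrative only - the task and
# label set below are assumptions, not Samsara's implementation.
import torch
import torch.nn as nn
import torchvision.models as models

# Start from a widely available pretrained backbone.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the pretrained weights so only the new head learns at first.
for param in backbone.parameters():
    param.requires_grad = False

# Swap the final layer for a head suited to a hypothetical in-cab
# safety task.
num_classes = 3  # e.g. seatbelt worn / not worn / camera obstructed
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
# ...train the head on labelled in-domain frames, then optionally
# unfreeze and fine-tune deeper layers at a lower learning rate.
```

The appeal of the pattern is that the expensive general-purpose training has already been done elsewhere; only the final layers need the domain-specific data that a vendor like Samsara actually holds.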

However, Welbourne is keen to highlight that the hype around AI in recent months has led to something of a disconnect between science fiction and reality (at this moment in time). That said, in his experience, customers are largely being pragmatic in their approach. Welbourne says: 

I think there are some misconceptions around AI and what's possible. For people that have used some of the new technologies, like ChatGPT, there tends to be an assumption that AI has really achieved human-like capabilities across the board. 

People think you'll just be able to ask it something and it will answer your questions and satisfy your needs. For the most part, with our customers, we're not hearing that so strongly - yet. There isn't as much interest or demand for some of the generative AI technologies. There are questions about it. 

I think, to an extent, our role there is to educate and inform and then really try to position our use of technologies, like generative AI, or any of the other sort of new cutting edge technologies, in a way that really solves specific problems for the customers. Again, we're talking with them more about the problems that we're trying to solve, matching specific AI technologies to those problems.

Equally, whilst Samsara is working with LLMs and is thinking through how they can be applied to its customers’ needs, Welbourne is realistic about what they can achieve at this moment in time. In particular, he’s very aware that the types of data, and how that data is being applied across Samsara’s install base today, are oftentimes not something an LLM can work with directly. Welbourne is wary of LLMs behaving in a convincing way, rather than actually being helpful. He says: 

I think there are things that LLMs are very good at, and there are things that they're actually not so good at but are happy to pretend that they're able to do. So when it comes to a solution that's really useful for customers, it's a combination of using an LLM, but that has to be partnered with various capabilities around data processing, automatic validation, querying, putting together certain inferences across various products and data that we have. 

An LLM isn't capable of working directly with a lot of things. If we're talking about video data, location data, other things that are really fundamental to our customers, an LLM isn't going to do that for you. That has to be done in advance, then maybe we prepare it in a certain way and then expose it to customers - that's something that could be interesting.
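The architecture Welbourne sketches - conventional pipelines do the heavy lifting on raw video and sensor data, the LLM only ever sees prepared, structured results, and its output is validated before reaching a customer - might look something like the following. Every name here (fetch_safety_summary, call_llm) is a hypothetical placeholder, not a real Samsara or third-party API:

```python
# Minimal sketch of pairing an LLM with upstream data processing and
# automatic output validation. All names and data are hypothetical.
import json

def fetch_safety_summary(fleet_id: str, date: str) -> dict:
    # Stands in for pre-computed results from video/sensor pipelines;
    # an LLM cannot consume raw video or location streams directly.
    return {"harsh_braking_events": 4, "seatbelt_alerts": 1, "miles_driven": 1250}

def call_llm(prompt: str) -> str:
    # Stub standing in for a call to a hosted language model.
    return "The fleet logged 4 harsh braking events that day."

def answer_fleet_question(fleet_id: str, date: str, question: str) -> str:
    summary = fetch_safety_summary(fleet_id, date)
    prompt = (
        "Answer using ONLY the data below. If the data cannot answer "
        "the question, say so rather than guessing.\n"
        f"Data: {json.dumps(summary)}\nQuestion: {question}"
    )
    answer = call_llm(prompt)
    # Automatic validation: reject answers that cite numbers absent from
    # the source data, to catch confident-sounding fabrication.
    for token in answer.replace(".", " ").split():
        if token.isdigit() and int(token) not in summary.values():
            return "Could not verify the answer against the source data."
    return answer

print(answer_fleet_question("fleet-42", "2023-07-05", "How many harsh braking events?"))
```

The validation step is deliberately crude here; the point is the shape of the system: the LLM is one component in a pipeline, not the pipeline itself.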

Risk vs reward

Welbourne is honest that working in AI at this moment in time is challenging, because the field is moving at such a fast pace. He says that your plans can be disrupted every few months when a new model comes out and that Samsara invests a lot in keeping up with what’s coming. Welbourne adds that it’s an “exciting and somewhat scary time”, but that Samsara’s focus is on deploying AI technology in appropriately constrained ways, within scope, using AI models that have been evaluated effectively, and with an appropriate level of human oversight. 

Consequently, managing risk is an inevitable part of the job description. And as noted previously, Samsara has an ethics and privacy committee that reaches across its leadership team. Welbourne adds that Samsara’s approach to risk involves weighing priority against opportunity - thinking through what risks it could incur when using a particular type of AI technology. He explains: 

A lot of our approach to risk, once we've decided that there's a use case that's really high value, is around how we design an AI solution. How we evaluate, for example, a computer vision model or a new machine learning model. How do we evaluate that before we put it into production? There are various stages where we're iterating through testing on different data sets and rigorously looking at the performance. How does it behave in various situations? We do all that before we launch to customers. And then even after we have launched, we continue monitoring behavior and performance. That's probably the key way that we stay on top of risks.
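A stripped-down version of that staged evaluation might look like the sketch below, where a candidate model must clear an accuracy bar on several held-out datasets before launch. The stage names, metric, and thresholds are illustrative assumptions, not Samsara's actual criteria:

```python
# Sketch: gate a model launch on per-scenario evaluation, so a model
# that is strong on average but weak in one situation is held back.
from dataclasses import dataclass
from typing import Callable, Iterable, Tuple

@dataclass
class EvalStage:
    name: str                              # e.g. "night driving", "rain"
    dataset: Iterable[Tuple[object, int]]  # (input, expected label) pairs
    min_accuracy: float

def accuracy(model: Callable, stage: EvalStage) -> float:
    correct = total = 0
    for x, label in stage.dataset:
        correct += int(model(x) == label)
        total += 1
    return correct / max(total, 1)

def ready_for_launch(model: Callable, stages: list) -> bool:
    for stage in stages:
        acc = accuracy(model, stage)
        print(f"{stage.name}: {acc:.1%} (threshold {stage.min_accuracy:.0%})")
        if acc < stage.min_accuracy:
            return False
    return True

# Trivial stand-in model and dataset to show the flow.
stages = [EvalStage("daytime highway", [(1, 1), (2, 0)], 0.9)]
print(ready_for_launch(lambda x: x % 2, stages))
```

The same per-scenario metrics can keep running against production traffic after launch, which is the post-deployment monitoring Welbourne refers to.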

Welbourne says that the field of AI is moving so fast that month to month there are new risks that he and his team have to think about. However, in terms of managing unintended consequences, Samsara often uses customer feedback to adapt. He adds: 

When we talk about unintended consequences, it's usually just failures in performance of the model. So the model is not necessarily doing the wrong thing, it's just not doing what it's supposed to with the highest accuracy. For example, we may miss a particular case of a driver wearing a seatbelt, or something else to that effect. 

We may learn about it from our own monitoring, but we may hear from customers that in their particular use case, in their scenario, we've missed an instance of a safety event that they're concerned about. And in that situation, we take their feedback, we triage and understand what happened, and then we try to improve the model from there so that we don't make the same mistake next time.
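Closing that loop - customer report, human triage, confirmed misses folded back into training - is a common pattern, and a minimal sketch (with hypothetical names throughout) might look like this:

```python
# Minimal sketch of a feedback-triage loop: customer-reported misses
# are confirmed by a human reviewer before becoming training examples.
# All class and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class MissReport:
    event_id: str
    description: str   # e.g. "undetected seatbelt event"
    confirmed: bool = False

class FeedbackTriage:
    def __init__(self) -> None:
        self.retraining_queue: list[MissReport] = []

    def triage(self, report: MissReport, reviewer_confirms: bool) -> None:
        # A human checks the underlying footage/telemetry before the
        # report is allowed to influence the next model iteration.
        report.confirmed = reviewer_confirms
        if reviewer_confirms:
            self.retraining_queue.append(report)

    def next_training_batch(self) -> list:
        # Confirmed misses become labelled examples for retraining, so
        # the same mistake is less likely to recur.
        batch, self.retraining_queue = self.retraining_queue, []
        return batch
```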

My take

Something I always think about when looking at how companies use AI is that caution can often be a competitive advantage. People worry about missing the boat when it comes to rolling out new technologies - a fear that can sometimes be justified. But when considering the risks associated with AI, it’s also true that the considered application of a tried and tested technology that delivers maximum value to customers can be more advantageous than something rushed. 

