
Thomson Reuters on Using GenAI to Augment the Professional Workforce

source link: https://www.informationweek.com/machine-learning-ai/thomson-reuters-on-using-genai-to-augment-the-professional-workforce


Shawn Malhotra, head of engineering, discusses using a GenAI platform to transform work in legal, compliance, and other fields.

It is no state secret that AI, and generative AI in particular, is the shiny technology of the moment -- perhaps of the generation -- that enterprises want in their tool belts. As the market looks to move beyond the freshman stage of GenAI, information conglomerate Thomson Reuters is finding uses for the technology in professional work.

Thomson Reuters has its hands in a host of sectors, from legal and compliance to media, and earlier this year it pledged to invest $100 million annually in AI, with a focus on the legal, accounting, global trade, and compliance professions.

In this episode of DOS Won’t Hunt, Shawn Malhotra, head of engineering for Thomson Reuters, discusses how his organization works with its GenAI platform, with the intent of transforming how work gets done in those fields.

This seemed to be the year of AI, GenAI specifically. For Thomson Reuters, did the rise of this technology -- its emergence in the public eye -- come out of left field, or were you already looking in this direction to some degree before the proverbial hype started?

Yeah, it’s a great question. Artificial intelligence has been at the core of what we do for quite some time. We like to talk about how we’ve been deploying AI solutions for over 30 years at Thomson Reuters to help legal professionals, tax professionals, and compliance professionals. And we’ve had our eyes on large language models for quite some time because they held a lot of promise. Up until about GPT-3, we thought they were promising, but not quite there. When we tested them in our customer applications, they just weren’t quite meeting the mark. They were good, but not quite there. So they were on our radar, but I think what caught us by surprise -- and caught a lot of folks by surprise -- was just how quickly they got there. From GPT-3 to GPT-3.5, and now with GPT-4, the rate of improvement was pretty staggering, and that’s really opened up a lot of possibilities that we’ve been going after. So I’d say the technology itself didn’t surprise us; it was on our radar. But the pace at which it improved was certainly surprising.


Listen to the full DOS Won’t Hunt podcast at the source link.

