Source: https://diginomica.com/enterprise-hits-and-misses-zoom-stirs-ai-privacy-controversy-esg-has-teeth-and-open-source-hits

Enterprise hits and misses - Zoom stirs an AI privacy controversy, ESG has teeth, and open source hits a commercial crossroads

By Jon Reed

August 14, 2023


Lead story - Generative AI - reassessing the risks and use cases

I return from vacation, only to find - more generative AI. But then again, I do welcome the precision of critical analysis. Neil kicks that off with Generative AI in the enterprise - re-assessing the risk factors. Neil defines four fundamental AI tensions that organizations must contend with:

  • Deploying models for customer efficiency versus preserving their privacy
  • Significantly improving the precision of predictions versus maintaining fairness and non-discrimination
  • Pressing the boundaries of personalization versus supporting community and citizenship
  • Using automation to make life more convenient versus de-humanizing interactions

In theory, "AI ethics" should help - but doesn't it seem like AI ethics is always lagging behind systems in production? Though Neil has been sharply critical of the problematic field of "AI ethics," he notes some promising developments, including new approaches to operationalizing AI ethics.

Of course, the challenge with generative AI is that we can't assess live enterprise use cases yet. However, George covers one that should go live in 2024: Elsevier wades into generative AI - cautiously. Elsevier has opted not to build its own LLM; it will license ChatGPT instead. But as George writes, this year is all about ensuring good results for research queries:

Elsevier is starting small with an alpha release of the new AI capabilities and taking advantage of its existing citation search engine, knowledge graph, and custom ontology to ground ChatGPT’s results to a chain of trust. This builds on the firm’s previous work on Small Language Models and graph data we covered in March.

Elsevier is also limiting the hallucinatory downsides of ChatGPT by putting a semantic search engine underneath it. He quotes Elsevier:

Using the query that the user types in, we're firing that into a semantic search engine and getting back the list of results. And we're using that, in addition to the query, to prompt the LLM to give essentially a summary. So we're essentially using the LLM as almost the natural language interface.

So when you get the results back, you actually get the references from Scopus that support all of the summary statements that come up in the summary. So that obviously reduces the risk of us making up references because it's very hard to make them up when you've essentially returned them from a search engine.

This strikes me as a well-thought-out approach to getting the most from an off-the-shelf LLM while limiting its downsides. But I would also point out that: 1. this organization has considerable data and semantic assets to make this happen, and 2. when an LLM is just one part of a well-designed mix, I think you would call this a progression of enterprise tech, not a revolution.
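The grounding pattern Elsevier describes - fire the user's query at a semantic search engine, then prompt the LLM with both the query and the retrieved results so every summary statement cites a returned reference - can be sketched roughly as follows. This is a minimal illustration, not Elsevier's implementation: the function names, the Scopus-style IDs, and the word-overlap "search engine" stand-in are all hypothetical.

```python
# Sketch of retrieval-grounded summarization: the LLM only summarizes
# references the search engine actually returned, which is what makes
# fabricated citations hard. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class Reference:
    ref_id: str      # e.g. a Scopus-style identifier (hypothetical format)
    title: str
    abstract: str

# Toy corpus standing in for the citation search engine's index.
CORPUS = [
    Reference("SCOPUS-001", "Graphene battery anodes", "Graphene improves anode capacity."),
    Reference("SCOPUS-002", "Solid-state electrolytes", "Solid electrolytes raise safety."),
    Reference("SCOPUS-003", "Knowledge graph ontologies", "Custom ontologies ground search."),
]

def semantic_search(query: str, corpus: list[Reference], k: int = 2) -> list[Reference]:
    """Stand-in for a real semantic search engine: rank by word overlap."""
    q = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda r: len(q & set((r.title + " " + r.abstract).lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, hits: list[Reference]) -> str:
    """Prompt the LLM with the query *and* the retrieved results, asking it
    to support every statement with one of the returned reference IDs."""
    context = "\n".join(f"[{r.ref_id}] {r.title}: {r.abstract}" for r in hits)
    return (
        "Summarize the findings relevant to the query below. "
        "Support every statement with one of the bracketed reference IDs.\n\n"
        f"Query: {query}\n\nReferences:\n{context}"
    )

hits = semantic_search("battery anode materials", CORPUS)
prompt = build_grounded_prompt("battery anode materials", hits)
```

The key design choice is that the LLM acts, as the Elsevier quote puts it, "almost as the natural language interface": generation is constrained to the retrieved context, so the references backing each summary statement come from the search engine, not the model.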

I realize that harshes the buzz of exuberant AI marketing teams everywhere... Just be glad I didn't get a chance to weigh in on George's Why we need to treat AI like a toddler - OWASP lists LLM vulnerabilities (that one came out while I was on vacation; Alex Lee reeled it into their guest edition of Hits and Misses last week).

diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Here are my three top choices from our vendor coverage:

Jon's grab bag - Mark filed an instructional hybrid cloud use case, Hybrid cloud gets the job at UK’s Department for Work and Pensions. Finally, Em filed a terrific green AI piece via AI for plant breeding - the new green revolution?

Best of the enterprise web

My top eight

Zoom Faces Challenges in Navigating the Age of Generative AI – Amalgam Insights' Hyoun Park issued a definitive examination of Zoom's controversial new terms of service. And Park is right; this issue extends far beyond Zoom:

On August 7, 2023, Zoom announced a change to its terms and conditions in response to language discovered in Zoom’s service agreement that gave Zoom nearly unlimited capability to collect data and an unlimited license to use this information going forward for any commercial use. In doing so, Zoom has brought up a variety of intellectual property and AI issues that are important for every software vendor, IT department, and software sourcing group to consider over the next 12-18 months.

In my view, Zoom made numerous mistakes here. As I said to Park on Twitter:

Yes well, to your point, vendors may be able to get away with this kind of tactic on the consumer side where data takeaways are built into the TOS, but I don't believe it's going to fly on the enterprise side, and companies with enterprise ambitions need to do better than this

— Jon Reed (@jonerp) August 13, 2023

Zoom did issue some clarifications on this policy subsequent to Park's post. But as per ZDNet, that may not be enough: Zoom is entangled in an AI privacy mess. Zoom may have stepped in it this time around, but Park is right to extend this issue beyond Zoom:

This is going to be a wild ride over the next year. I’m really looking forward to seeing how @diginomica keeps documenting this shift in software expectations. Between AI shifts & Hashicorp becoming less open source this week, software license agreements are quickly changing.

— Hyoun Park (박현경) (@hyounpark) August 13, 2023

For more of my commentary, click on Park's tweet above. As for Park's note on open source, that brings us to our next pick:

Whiffs

Via Alex Lee, I guess we have to move recipes off the harmless generative AI use case list for a little while:

"the perfect nonalcoholic beverage to quench your thirst and refresh your senses” - chlorine gas recipe nice!

-> welcome to the generative AI "revolution" https://t.co/0lEKPrcA2K

— Jon Reed (@jonerp) August 10, 2023

So I whiffed a bit here:

AI-powered gamification is one of the most abysmal tech "trends" of all time. In a local group, so-called "top contributors" awarded by Facebook are typically amongst the lowest-value, full of viral wiseass + lack of helpfulness.

As an admin I have no control of it - awesome

— Jon Reed (@jonerp) July 27, 2023

Turns out, buried in the bowels of Facebook, I did have some control over these settings for the local group I run. With foot removed from mouth, I stand by the tweet; with a bit more flexibility to create our own badges, we'd actually have something kinda fun. The "top contributor" group badge is brute force - a top contributor, by Facebook's definition, is just a volume award. Whoever posts the most gets the nod. I shouldn't have derived nearly so much satisfaction from turning it off, thereby taking all the top contributor badges away from active members, but I did. Reserve a spot for me in purgatory...

We'll close with my news article title of the week: Florida village terrorized by peacocks plans to use vasectomies to solve the problem. If that seems like an indirect solution, bear in mind they plan to give the vasectomies to the peacocks - at least, I think that's the plan. We'll have to check back in on that one... See you next time.

If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.
