
Does the enterprise have a fake news problem - and will generative AI make it worse?

By Jon Reed

January 3, 2024


The first time I asked if the enterprise had a fake news problem was in 2015. The answer has never been a resounding yes or no.

The good news? We work in an industry where our careers are defined by the success of complex, high-stakes projects. We simply can't afford to get duped too often.

On the downside, vendors with boatloads of marketing spend try to lure our wallets into enthusiastic purchases of next-gen tech, aka shiny new toys - whether or not those toys have proven their merits, or are a good fit for our project.

Yes, software vendors want to loosen those project budgets - but completely false techno-claims are rarely useful, and tend to backfire. "Land and expand" might not sound that fun for a customer, but at least it implies the need to win the buyer over with results. Therefore, this is not exactly a fake news problem; it's the danger of too much slanted, incomplete or uncritical coverage. And yes, tech media is accountable here too, diginomica included.

Will generative AI blur enterprise reality?

The question du jour is: will generative AI make this problem worse?

  • Will the rampant use of gen AI tools result in misleading enterprise content?
  • Will generative AI make it harder to extract the signal from the noise?

As we head into 2024, most of the early gen AI use cases are internal, like the "co-pilots" cropping up in every keynote (though internal use of gen AI in coding will eventually get pushed into external-facing products). That's understandable when you consider that the price of mistakes in externally-facing gen AI content can be significant - especially in regulated industries. So we won't see as much externally-facing gen AI content yet, though we will see examples of heavily-controlled, heavily-trained output. Here are some early gen AI external use cases:

  • HR job descriptions - just about every HR tech vendor is geeked up about this one, though Brian Sommer has a few thorny questions. Most of these will have some quality control/human review at first.
  • Customer-facing gen AI chatbots - we won't see too many of these yet, but they are coming. I already published a use case on a successful gen AI service bot project, and in a regulated industry to boot.
  • Marketing copy, such as press releases and blog posts - and, in the pondscummier part of the pool, SEO gimmick pages posted simply to game search engines and win advertising dollars, regardless of the content's accuracy or value (this type of content likely won't be fact-checked, and that will lead to embarrassments for reputable publications).

The marketing copy use case is a real shame, because generating blog content is a bland-as-heck application of the tech. When it comes to content generation, gen AI is far more compelling for summarization (e.g. white papers to PowerPoint) or multi-modal output (text to animation, text to image). We'll see some of these in the enterprise, but I think it will be another year before well-executed animations from human text arrive. I did one poking fun at generative AI toothbrushes.

So, we'll see more bot article copy this year - but the gen AI copy will be on the mediocre side - and that's not good enough for the attention enterprises are competing for (see: Can AI displace content creators? For B2B content, the answer is no - but with a disconcerting asterisk).

Will some of this published copy be inaccurate? Perhaps at times, but most enterprises will have humans in the loop for final fact checks. Enterprise gen AI copy won't be a hotbed of inaccuracy, but a tedious flow of overconfident, buzzword-drenched mediocrity. The exceptions will be "helpful" content like expert FAQs, job descriptions, and narrowly-trained service bots. That content will be useful, but it's not the kind of content we would worry about from a fake news standpoint.

That won't stop marketers from trying to win us over - and if they think AI-generated content will help, they are sure to try it. If we're not careful, we'll find ourselves sinking big dollars into unproven solutions, long before they are mature enough (see: early blockchain adopters). We risk losing track of the people, process and data issues at the heart of our problems.

The necessity of puncturing hype balloons is here to stay - and gen AI doesn't change that. But: we can all become savvier at evaluating data and questioning vendor hype. That will serve us well whenever we have the misfortune to run into buzzword-laden, bot-generated text. Last time around, I detailed two ways of doing that:

  • sharpen our BS filters
  • break out of our filter bubbles, into more open conversations - discussions that challenge our tech assumptions with expert perspectives.

Sharpening those BS filters - what we're up against

I've now updated my top issues that prevent enterprise clarity:

  • Vendor-funded or vendor-biased stories tend to get disproportionate exposure on social networks.
  • Lack of disclosure can obscure the financial ties between "research" reports and media coverage.
  • Wall Street regularly misunderstands enterprise software, causing stock fluctuations that don't reflect the long-term health of the vendor.
  • Fast-moving stories, such as the OpenAI rollercoaster, can be further obscured by vendor PR campaigns, or social hashtag frenzies.
  • The big tech news outlets are primarily chasing eyeballs/ad revenues, and therefore cater to what you are most likely to click on - regardless of whether the article gives you a better context for your project.
  • Social networks are not a content meritocracy - people often share the content that they hope will advance them professionally. Meanwhile, some of the best enterprise analysis can get buried in busy social streams, or lost in bursts of tech event news.

Carl Sagan's fine art of baloney detection - and how it can help us

No way around it: we're going to need better enterprise BS filters. A couple of years ago, I ran into one of the best BS detectors I've seen: scientist Carl Sagan's Baloney Detection Kit. Sagan honed his BS detector through scientific inquiry. In her Sagan piece for Brain Pickings, Maria Popova pulled Sagan's rules from his book, The Demon-Haunted World: Science as a Candle in the Dark, via the chapter "The Fine Art of Baloney Detection."

Here are nine rules of scientific inquiry that Sagan believes we can apply to everyday life - along with my quick takes on the enterprise relevance of each.

Wherever possible there must be independent confirmation of the “facts.”

Yep - never trust a single source on any issue exclusively - build an information network that cross-checks itself.

Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.

Indeed. We can do this by seeking out communities with diverse constituents, where issues are debated openly by those who know their briefs.

Arguments from authority carry little weight — "authorities" have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts.

Push back on "enterprise guru syndrome." Put ideas and tech to the real world test.

Avoid reliance on one vendor or services partner, no matter how much we're invested in them, or their platform. Check: my diginomica series on why independent consultants matter.

Try not to get overly attached to a hypothesis just because it's yours. It's only a way station in the pursuit of knowledge. Ask yourself why you like the idea. Compare it fairly with the alternatives. See if you can find reasons for rejecting it. If you don't, others will.

Rejecting easy/lazy enterprise narratives is a good start. Simplistic sentiments like "This vendor is legacy" or "Customers love this SaaS vendor" don't help us. Most vendors have strengths and weaknesses; one size never fits all.

Quantify. If whatever it is you're explaining has some measure, some numerical quantity attached to it, you'll be much better able to discriminate among competing hypotheses. What is vague and qualitative is open to many explanations. Of course there are truths to be sought in the many qualitative issues we are obliged to confront, but finding them is more challenging.

The biggest gap between Sagan's tips and our industry: so much of the quantified "research" in the enterprise market is vendor-funded. This doesn't mean it's useless, but the data can be framed in self-serving ways. Cross-checking reports from multiple sources helps, as do peer-based discussions to sense-check results. Three more from Sagan:

If there's a chain of argument, every link in the chain must work (including the premise) — not just most of them.

Occam's Razor. This convenient rule-of-thumb urges us when faced with two hypotheses that explain the data equally well to choose the simpler.

Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable are not worth much... You must be able to check assertions out. Inveterate skeptics must be given the chance to follow your reasoning, to duplicate your experiments and see if they get the same result.

"You must be able to check assertions out." Yes. Seek out those who challenge your views. Find those who know their stuff, and tell you what you don't want to hear. Build a network of specialized advisors/friends/associates across companies and roles.

My take - gen AI will test our enterprise BS filters

Enterprise software is not an impulse purchase. There is plenty of informed content, including peer review sites, that gives buyers a more realistic view before a deal is closed. There is a reason why we published a d·book on the B2B Informed Buyer.

I think what we're all after, really, is an enterprise context - one that helps us absorb data and apply it. We want a context that is flexible enough to shift quickly, one that can wade through noisy news cycles - and one that balances a hefty dose of skepticism with curiosity for what proper/bold innovation can accomplish.

Does generative AI dramatically change this? I don't think so. If we form discerning networks of smart colleagues and hone those BS filters, I like our chances. But if we aren't diligent, generative AI-based content might make our windshield a bit blurrier.

I'm much more concerned about the impact of gen AI on our civic/political discourse, where deep fakes have already made it difficult to properly fact check - even if you are determined to do so. But I don't see deep fake technology and completely false narratives being set loose on the enterprise in the same way - not yet. Still, there is a reason why diginomica pledged not to use any generative AI in the creation of our content (soon after, the Financial Times made a similar pledge - not many others have done so).

Some questioned whether this meant diginomica would miss out on gen AI innovations. Not at all; we actively investigate the use of AI in many aspects of our work - and we've documented hundreds of interesting AI use cases (I picked ten keepers in my 2023 Year in AI Use Cases review). Just because we don't use gen AI tools for writing doesn't mean we will automatically be granted reader trust - nor would we expect that. But we took this stance because we believe that when it comes to editorial, consistently earning that trust is everything. We hope it opens up exactly the type of dialogue I am instigating here.

