Source: https://uxdesign.cc/how-sphere-metas-new-ai-brain-can-perform-better-using-ux-design-cd76a4005e25

How Sphere, Meta’s new AI brain, can perform better using UX design

How might we apply affordances and signifiers to the future of AI’s zero-shot learning procedures?

Sculptor and his work
By Ilia Zolas from Unsplash

Large language models (LLMs) are no longer alone in a crowd. I will never forget the moment I first observed an LLM respond to a user across languages: a question received in one language, answered in another. That is not a simple translation action, and certainly not the kind of translation novelty that was solved decades ago.

I knew we had come a long way with few-shot learning, but this translation behavior was off the beaten track. It took me several months of trying to unravel the algorithmic infrastructure of this particular LLM. At the final whistle, I stumbled upon what was going on:

The model was applying zero-shot learning procedures [6].

Simply put, if you want a machine to learn from data, you need to provide that data. If you do not have enough of it, the machine has to find other ways to learn. That is the focal point of zero-shot learning:

Zero-shot learning is when a machine is taught to make predictions about classes it has never seen any training examples for, while few-shot learning is when a machine is taught to learn from only a handful of labeled examples [6].
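
To make the distinction concrete, here is a minimal sketch of the zero-shot idea in Python. The embedding function is a toy hashed bag-of-words stand-in for a real sentence encoder, and the labels and texts are illustrative; none of this is Sphere's actual implementation.

```python
# Minimal sketch of zero-shot classification: assign text to labels the
# "model" has never seen training examples for, by comparing the input's
# embedding to embeddings of plain-language label descriptions.
# embed() is a toy stand-in (hashed bag-of-words) for a real sentence encoder.
import math
from collections import Counter

def embed(text: str, dim: int = 256) -> list[float]:
    """Toy embedding: hash each lowercase token into a fixed-size, L2-normalized vector."""
    vec = [0.0] * dim
    for token, count in Counter(text.lower().split()).items():
        vec[hash(token) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def zero_shot_classify(text: str, label_descriptions: dict[str, str]) -> str:
    """Pick the label whose description is most similar to the input text."""
    text_vec = embed(text)
    return max(label_descriptions,
               key=lambda label: cosine(text_vec, embed(label_descriptions[label])))

if __name__ == "__main__":
    labels = {
        "sports": "an article about sports teams games and athletes",
        "politics": "an article about politics elections and government",
    }
    print(zero_shot_classify("the government passed a new elections law", labels))
```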

Sphere and zero-shot learning

Three challenges arise from written text. First, the author: what rooted foundation (tradecraft, academics, or lived experience) informs their authorship? Second, the text itself: words and context shape narratives, which can easily oscillate across ranges of opinion and accuracy. Third, citation.

Many can write, and many have some rooted foundation. Sphere's aspiration sits at the intersection of written text and the degree to which specific citations are included. This sounds like just about every conversation we have with people: as much as we make statements, how often are they based on specific sources?

Accuracy and precision, as an approach to validation, play an essential role in daily conversation: in what you and I say, in what we share with others, and in how we convey perspectives. Applying that approach to Sphere quantitatively, as a moonshot idea or a use case to work toward, strikes me as not only a good frame of mind but also a golden opportunity.

A robot shaking hands with a human
Created by the Author using DALL·E 2

Empirical reasoning

We are not really learning the ropes anymore:

Quantifying empirical reasoning is coming.

I know this for two reasons: (1) I create and deploy zero-shot learning algorithms on a daily basis and can speak as a primary source on the matter, and (2) there are people burning the midnight oil to optimize this end state as we speak.

Sphere problem set: why Wikipedia as the decomposed use case?

Being able to identify to what extent one citation is superior to another would be hitting the jackpot: it is the hinge for the larger problem set of evaluating empirical reasoning.

Take a look at how massive the Wikipedia ecosystem is — the following are the metrics Wikipedia uses to measure its growth [7]:

— Number of articles

— Number of words

— Number of pages

— Size of the database

As of August 5, 2022, the English Wikipedia contained 6,560,476 articles and more than 4 billion words, averaging about 636 words per article [7].

Wikipedia's general rate of growth is about 17,000 new articles a month [7].
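
As a quick back-of-the-envelope check (my own arithmetic on the figures above, not a claim from [7]), the article count and the per-article average are consistent with the "more than 4 billion words" figure:

```python
# Sanity-check the Wikipedia figures cited above (as of August 5, 2022).
articles = 6_560_476          # English Wikipedia articles
avg_words_per_article = 636   # average cited above
monthly_growth = 17_000       # new articles per month

total_words = articles * avg_words_per_article
print(f"~{total_words / 1e9:.2f} billion words")           # ≈ 4.17 billion, i.e. "more than 4 billion"
print(f"~{monthly_growth * 12:,} new articles per year")   # ≈ 204,000
```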

Thinking analogously: ratios matter. If, for an experiment, you apply your algorithmic approach to an ecosystem this ripe for a gold rush, you lay it all on the line for a true knockout. Demonstrating against Wikipedia data, measuring the extent to which embedded citations actually support their corresponding passages, sets the construct for what is to come. How many of you have requested a change to Wikipedia text because of an error? Now automate that to a scale no sizable army of humans could match, and AI can excel at it.

A robot and a person going for a walk
Created by the Author using DALL·E 2

User experience design

1. Information accuracy tooling

Google's search engine is a product: it lets us conduct information retrieval. Because of the body of work that began with, and accelerated through, empirical results around zero-shot learning, the next wave of technology productization is information accuracy.

If an algorithm can be shown to assert a range of precision and accuracy on a small dataset (small in comparison to the world wide web), in the context of Wikipedia citations and the passages they are attached to, then one can potentially automate how precision and accuracy are measured for any type of data.
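
To make that concrete, here is a hedged sketch of one way such automation could start: score how well a cited source supports a passage, then measure precision against a small human-labeled set. The Jaccard token overlap below stands in for a learned entailment or similarity model, and the threshold and example pairs are hypothetical.

```python
# Sketch: quantify "does this citation support this passage?" at scale.
# support_score() uses crude token overlap as a stand-in for a real model.

def support_score(passage: str, cited_source_text: str) -> float:
    """Fraction of shared vocabulary between a passage and its cited source (Jaccard)."""
    p, s = set(passage.lower().split()), set(cited_source_text.lower().split())
    return len(p & s) / len(p | s) if p | s else 0.0

def precision_at_threshold(labeled_pairs, threshold: float = 0.2) -> float:
    """labeled_pairs: (passage, source_text, human_says_supported) triples."""
    predicted_supported = [(passage, src, gold)
                           for passage, src, gold in labeled_pairs
                           if support_score(passage, src) >= threshold]
    if not predicted_supported:
        return 0.0
    return sum(gold for _, _, gold in predicted_supported) / len(predicted_supported)

if __name__ == "__main__":
    pairs = [
        ("The bridge opened in 1932.", "The bridge was opened to traffic in 1932.", True),
        ("The bridge opened in 1932.", "A recipe for sourdough bread.", False),
    ]
    print(f"precision @ 0.2 ≈ {precision_at_threshold(pairs):.2f}")
```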

KI-NLP, or knowledge-intensive natural language processing, is a way to label Sphere's new problem space. In the context of zero-shot learning, its future implementations are predictable: improve at learning and making predictions about new data classes without having seen any training data from those classes, and learn about novel concepts by transferring knowledge to use cases from other fields [6].
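
The retrieval step at the heart of knowledge-intensive NLP can be sketched in a few lines. The tiny in-memory corpus and the term-overlap scoring below are illustrative stand-ins for a web-scale corpus like Sphere's and a learned dense retriever.

```python
# Sketch of the retrieval step behind knowledge-intensive NLP: given a
# question, rank passages from a corpus and hand the best ones to a
# downstream reader or generator.
from typing import List, Tuple

def rank_passages(question: str, corpus: List[str], top_k: int = 2) -> List[Tuple[float, str]]:
    """Score each passage by the fraction of question terms it contains."""
    q_terms = set(question.lower().split())
    scored = [(len(q_terms & set(p.lower().split())) / len(q_terms), p) for p in corpus]
    return sorted(scored, reverse=True)[:top_k]

if __name__ == "__main__":
    corpus = [
        "Sphere is a web-scale corpus released by Meta AI for knowledge-intensive NLP.",
        "Sourdough bread relies on wild yeast for fermentation.",
        "Knowledge-intensive tasks require retrieving evidence before answering.",
    ]
    for score, passage in rank_passages("which corpus supports knowledge-intensive nlp", corpus):
        print(f"{score:.2f}  {passage}")
```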

2. Chatbots

If information accuracy is quantified against empirical benchmarks, we could design a chatbot that potentially provides a better-curated, more autonomously informed answer. Currently, the vast majority of chatbots must reason according to a set of procedures based on (1) bodies of knowledge and (2) fields of research, and those efforts are largely implemented with few-shot learning. Zero-shot learning is new to chatbots; by now, you will recognize that it is new to many fields of productization and industries.
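
One way to picture such a chatbot answer is as a small data structure that carries its own evidence: the answer text, the passage it leaned on, and a support score, with the bot declining when support is too weak. Everything below (names, the threshold, the overlap-based scoring) is a hypothetical illustration, not Sphere's or any production chatbot's actual API.

```python
# Sketch of a citation-backed chatbot answer that declines on weak evidence.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CitedAnswer:
    text: str       # the answer shown to the user
    citation: str   # the evidence passage backing it
    support: float  # 0.0 to 1.0

def answer_with_citation(question: str, passages: List[str],
                         min_support: float = 0.3) -> Optional[CitedAnswer]:
    """Return the best-supported passage as a cited answer, or None to decline."""
    q_terms = set(question.lower().split())

    def support(p: str) -> float:
        return len(q_terms & set(p.lower().split())) / len(q_terms)

    best = max(passages, key=support)
    if support(best) < min_support:
        return None  # not enough evidence: better to decline than to guess
    # A real system would generate an answer conditioned on `best`; here we echo the evidence.
    return CitedAnswer(text=best, citation=best, support=support(best))

if __name__ == "__main__":
    passages = [
        "Sphere retrieves passages from a web-scale corpus to back claims with citations.",
        "Sourdough bread relies on wild yeast for fermentation.",
    ]
    print(answer_with_citation("how does sphere back claims with citations", passages))
```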

As we begin to traverse the ethical boundaries of AI, as with any go-to-market deployment, guardrails need to be integrated.

Fabricio Teixeira states, “Just because something is possible doesn’t mean it should exist” [11].

What kind of team and resources are we going to propose and operationalize to integrate compliance systematically, namely technical governance and risk auditing grounded in ethics and bias?

Robots going on a walk with humans
Created by the Author using DALL·E 2

Parting thoughts: signifiers and affordances

The existence of an affordance [12][13] depends on the relationship between the object (such as the product) and the agent (such as humans, animals, or even machines) [10].

How might user experience designers optimize the harder-to-see, not-so-physical objects (the AI model) based on how the user interacts with them? Fundamental principles of interaction [10] are influenced by epistemology, which holds, in one respect, that cognition and emotion are tightly intertwined [9]. As AI grows in complexity, signifiers and affordances will inevitably lead the way in (1) revealing the essential attributes of a well-designed AI object (discoverability [14] and understanding [10]) and (2) actually producing the pleasurable user experiences that users demand.

References:

UX Collective community:

11. Teixeira, Fabricio. (2018, December 31). When AI gets in the way of UX. UX Collective. https://uxdesign.cc/when-ai-gets-in-the-way-of-ux-17de95f40772

12. O’Sullivan, L. (2020, February 22). Understanding affordances in UI design. UX Collective. https://uxdesign.cc/understanding-affordance-in-ui-design-4b4ddbdd0b30

13. Teixeira, Fabricio. (2016, November 15). O que são Affordances? UX Collective. https://brasil.uxdesign.cc/o-que-são-affordances-9cff02103dc6

14. Teixeira, Fabricio. (2017, November 21). UX portfolios, writing products, discoverability in touchscreens, and more UX this week. UX Collective. https://medium.com/p/8711527e5ff4.

Others:

1. Barham, P., Chowdhery, A., Dean, J., Ghemawat, S., Hand, S., Hurt, D., Isard, M., Lim, H., Pang, R., Roy, S., Saeta, B., Schuh, P., Sepassi, R., Shafey, L. E., Thekkath, C. A., & Wu, Y. (2022, March 23). Pathways: Asynchronous distributed dataflow for ML. ArXiv.Org. https://arxiv.org/abs/2203.12533

2. Dean, J. (2021, October 28). Introducing Pathways: A next-generation AI architecture. Google. https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/

3. Introducing Sphere: Meta AI’s web-scale corpus for better knowledge-intensive NLP. (n.d.). Retrieved August 5, 2022, from https://ai.facebook.com/blog/introducing-sphere-meta-ais-web-scale-corpus-for-better-knowledge-intensive-nlp/

4. Piktus, A., Petroni, F., Karpukhin, V., Okhonko, D., Broscheit, S., Izacard, G., Lewis, P., Oğuz, B., Grave, E., Yih, W., & Riedel, S. (2021, December 18). The web is your oyster — Knowledge-Intensive NLP against a very large web corpus. ArXiv.Org. https://arxiv.org/abs/2112.09924

5. Wenzek, G., et al. (2019). CCNet: Extracting high quality monolingual datasets from web crawl data. ArXiv.Org. https://arxiv.org/pdf/1911.00359.pdf

6. Tilbe, Anil. (2022, August 5). Zero-shot vs few-shot learning: 2022 updates. Towards AI. https://pub.towardsai.net/zero-shot-vs-few-shot-learning-50-key-insights-with-2022-updates-17b71e8a88c5

7. Wikipedia:Size Of Wikipedia. (n.d.). Wikipedia. Retrieved August 5, 2022, from https://en.wikipedia.org/wiki/Wikipedia:Size_of_Wikipedia

8. Awachat, S., Raipure, S., & Kalambe, K. (2022). Technical review on knowledge intensive NLP for pre-trained language development. International Journal of Health Sciences, 9591–9602. https://doi.org/10.53730/ijhs.v6ns2.7510

9. Epistemology and emotions. (n.d.). Retrieved August 5, 2022, from https://books.google.com/books?hl=en&lr=&id=A2MA5ouse_cC&oi=fnd&pg=PA167&dq=cognition+and+emotion+are+intertwined&ots=byP86LoSwk&sig=KrYDdE83K979sMf0mar7syKtrjM#v=onepage&q=cognition%20and%20emotion%20are%20intertwined&f=false

10. Norman, D. (2013). The Design of Everyday Things: Revised and expanded edition. Basic Books.

