
Ask HN: What is the relevance today of Minsky's “Society of Mind” concept?

 1 year ago
source link: https://news.ycombinator.com/item?id=34100102

36 points by eigenvalue 2 hours ago | 11 comments

Minsky wrote this book in 1986, towards the end of his very long career thinking about how to build intelligent machines. For a basic overview, see:

https://en.wikipedia.org/wiki/Society_of_Mind

You can find a complete pdf of the book here:

http://www.acad.bg/ebook/ml/Society%20of%20Mind.pdf

My question to the HN community: has this work become irrelevant given recent progress in machine learning, particularly Transformer-based models such as GPT-3 or "mixed-modality" models such as Gato?

It seems to me that some of these ideas could make a comeback in the context of a group of interacting models/agents that pass messages to each other. You could have a "top-level" master model that responds to a request from a human (e.g., "I just spilled soda on my desk, please help me") and figures out a reasonable course of action. The master model then issues requests to various "specialist" models trained on particular kinds of tasks: an image-based model for exploring an area to look for a sponge, a feedback-control model trained to grasp the sponge, and so on. Or, in a scenario more relevant to how this tech is widely used today, a GitHub Copilot-style agent might have an embedded REPL and could recruit an "expert debugging" agent that is particularly good at figuring out what caused an error and how to modify the code to fix the bug.
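To make the master/specialist idea concrete, here is a minimal sketch of that message-passing structure. All the names (`Message`, `MasterAgent`, the two specialists) are hypothetical stand-ins; in a real system the hard-coded planner and the lambda handlers would be replaced by calls into trained models.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Message:
    task: str     # e.g. "locate", "grasp"
    payload: str  # free-form description of the sub-task

class MasterAgent:
    def __init__(self) -> None:
        # Registry mapping task names to specialist handlers.
        self.specialists: Dict[str, Callable[[Message], str]] = {}

    def register(self, task: str, handler: Callable[[Message], str]) -> None:
        self.specialists[task] = handler

    def plan(self, request: str) -> List[Message]:
        # Toy planner: a real master model would decompose the request itself.
        if "spilled" in request:
            return [Message("locate", "find a sponge"),
                    Message("grasp", "pick up the sponge")]
        return []

    def handle(self, request: str) -> List[str]:
        # Dispatch each planned sub-task to its specialist, collect replies.
        return [self.specialists[m.task](m) for m in self.plan(request)]

# Hypothetical specialists standing in for an image model and a control model.
master = MasterAgent()
master.register("locate", lambda m: f"vision: {m.payload} -> found at desk edge")
master.register("grasp", lambda m: f"control: {m.payload} -> gripper closed")

print(master.handle("I just spilled soda on my desk, please help me"))
```

The interesting design question is whether the `Message` boundary between agents is fixed by hand like this or itself learned, which is roughly where Minsky's hand-designed "agents" differ from an end-to-end trained system.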

I suppose the alternative is that we skip this altogether and just train a single enormous Transformer model that does all of this stuff internally, so that it's all hidden from the user, and everything is learned at the same time during end-to-end training.
