OpenAI Readies New Open-Source AI Model - Slashdot

source link: https://news.slashdot.org/story/23/05/16/1526219/openai-readies-new-open-source-ai-model


OpenAI Readies New Open-Source AI Model (reuters.com) 32

Posted by msmash

on Tuesday May 16, 2023 @01:20PM from the up-next dept.
OpenAI is preparing to release a new open-source language model to the public, The Information reported on Monday, citing a person with knowledge of the plan. Reuters: OpenAI's ChatGPT, known for producing prose or poetry on command, has gained widespread attention in Silicon Valley as investors see generative AI as the next big growth area for tech companies. In January, Microsoft announced a multi-billion dollar investment in OpenAI, deepening its ties with the startup and setting the stage for more competition with rival Alphabet's Google.
  • It is not intelligence, it is a Chinese Room (qv) faking of intelligence.

    But people are calling it AI.

    So... don't we need a new term for the actual attempts to build something genuinely intelligent (where AI research has been failing for decades and is still failing)?
    • Re:

      Oh, and how is that snippet in any way a SUMMARY of the story?!
      • Re:

        Did you read the story? It shouldn't even be a story. It's literally just a "yup, this is happening" statement, with no further information.

        I'd be extremely curious how crippled the open-source version will be so that they can justify selling the closed-source version. I'm assuming it'll be pretty "last generation" until the next iteration of ChatGPT comes along and they can push out the hobbled old version for the open-source crowd.

      • Re:

        The report said, "OpenAI is unlikely to release a model that is competitive with GPT."

        How about you?
    • Re:

      It's too late. "AI" was grabbed by the marketing departments to demarcate this "somewhat more complicated than a query" LLM model. When/if we ever reach anything approaching "real" AI, they'll coin a new term for it. And they'll be as stupid as they always are and call it something extra-idiotic, like "AI+" or "AI 2.0" or something.

      • Re:

        The industry-standard term for this kind of work is "AI", and has been since the 1950s. The eventual goal is "general-purpose AI", which is the term you're looking for, greytree. The industry standard is that any application that can make even a poor attempt at the Turing Test is AI. It may be of substandard IQ, but it is AI.

        • Re:

          Except that, back in the 50s, it was noted that an obvious disproof of the Turing Test as a test of intelligence was the Chinese Room argument, which I mentioned above.
          • The Chinese Room argument didn't exist until 1980. It couldn't have been "noted" as "an obvious disproof of the Turing Test" in the 1950s by anyone but a time-traveler. As a "disproof" of the Turing Test, it works well enough, but it's an attack from the side. (Searle is attacking computationalism, not behaviorism.) It also comes way too late. Weizenbaum had put the final nail in the coffin back in the 60s with ELIZA, much to his annoyance.

            The Turing Test was hardly something that needed to be "disproved" anyway, even in the 50s. I don't know that anyone, even Turing, thought it was unassailable. You'll find no end of contemporaneous criticism of it. Remember, Turing thought the question "can machines think" was meaningless and proposed the imitation game as an alternative.

            I think it's fair to call programs like ChatGPT "Chinese Rooms" though it's worth pointing out that Searle's description affords the Chinese Room more computational power than a transformer model can manage.

            Oh, the term you're looking for is AGI or "Artificial General Intelligence". If you're interested in how we got stuck with such a misleading term in the first place, Pamela McCorduck has an excellent history in her book Machines Who Think. Don't let the provocative title put you off. The blame for that lies with the publisher, not the author.
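            For context, ELIZA's trick was simple pattern matching and reflection, nothing like a modern transformer. A minimal sketch of that style of responder (hypothetical rules for illustration, not Weizenbaum's original DOCTOR script) might look like:

```python
import random
import re

# ELIZA-style rules: a regex pattern plus canned templates that reflect
# part of the user's input back. These rules are illustrative only.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r".*"),
     ["Please tell me more.", "I see. Go on."]),
]

def respond(text: str) -> str:
    # Try each rule in order; the catch-all pattern guarantees a reply.
    for pattern, templates in RULES:
        match = pattern.match(text.strip())
        if match:
            return random.choice(templates).format(*match.groups())
    return "Please go on."
```

            For example, `respond("I am sad")` yields "Why do you say you are sad?" -- pure string surgery, which is exactly why ELIZA buried behaviorist readings of the Turing Test: it fooled people without anything resembling understanding.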

            • Re:

              You mean you haven't seen Back to the Future (1985) ?!

              ( I sit corrected. )
        • Re:

          General-purpose AI isn't it either. A general-purpose AI just shifts the problem by a dimension. When people talk about AI they are, and always have been, talking about an artificial sentient lifeform. Simply increasing the complexity of the tasks that a mindless automaton can solve doesn't get you there, because it does not and never will have an inner monologue or internal observer. That is also what everyone (outside the last few years, when they've been chasing money, or cross-domain programmers) dreams about

          • Re:

            > does not and never will have an inner monologue or internal observer
              That is one hypothesis. The other is that humans don't have an inner monologue other than a feedback loop of states. It may very well be that an artificial sentient lifeform is indistinguishable from an advanced chatbot precisely because a human sentient lifeform is nothing more than a biological implementation of an advanced chatbot, and it is from that hypothesis that chatbots are named AI.

      • Re:

        AI++
        I think that works.

        • Re:

          Maybe we should call it AI 2000

      • Re:

        They don't need a new term. This so called "fake" AI is still AI in the sense that it does mimic intelligent behavior, even if it does so through entirely unintelligent means.
      • Re:

        No, something that is artificial is real and made by humans. This intelligence doesn't exist.

        • Re:

          And yet it does things we would consider impossible just a few years ago.

          • Re:

            So does everything invented in the past few years. Or rather, it does things SOME considered impossible a few years ago. The thing I thought impossible for this technology a few years ago is still impossible: sentience, aka intelligence. I have a frog which outperforms the potential of this technology, and that thing is damn near as close to a robot as a lifeform gets.

  • So... don't we need a new term for the actual attempts to build something genuinely intelligent (where AI research has been failing for decades and is still failing)?

      We already have that term. It is Artificial General Intelligence. The term has been in use since the '90s. Another term for what you are describing is Strong AI.

      • Re:

        AGI just attempts to shift the complexity, the same thing we have but covering more topics. That isn't the same as an attempt to create an actual artificial sentient intelligence.

        • Re:

          Intelligent != Sentient. Ant colonies exhibit intelligent behaviours when viewed as a single entity, yet they don't have sentience as we understand it; it can be the same for automata built on silicon.

          A system that adapts to its environment through complex, context-aware processes is intelligent even if it doesn't have an "inner voice" that translates those processes into words. We would do well to separate both concepts, because they are not the same, and one does not entail the other.

          • Re:

            "Intelligent != Sentient"

            Yes, it does. An intelligence is a sentient being. It doesn't matter how many terms people within the field use to try to get around the fact that they don't have a clue how to make one. They do not get to define 'artificial intelligence', as those words already mean something. An artificial intelligence is an artificial sentient being, an intelligence produced by artificial means. Everything else produced by the field is not AI but rather the failed attempts to produce an AI.

            "A system that ada

    • You don't really need to advertise your ignorance of a major computer science field in every story.

      • Re:

        AI isn't the property of a major computer science field. It is a general term used by the public and the disparity in concepts is intentional in order to aid in funding of development of failed efforts and marketing of the products of those failed attempts.

      • Re:

        You don't really need to advertise what a hateful cunt you are in an irrelevant and pointless reply to every comment.
    • Re:

      Nuh-UH! ChatGPT uses the letter "L" all the time.
    • In other news, the internet is just a fax machine, amirite?

    • Re:

      An artificial system that can mimic intelligence is still AI, even if it has no actual intelligence to speak of. Computer chess programs beat grandmasters; those programs do not think either, yet they are also called AI. This is not a misapplication of the term. AI is a field of ongoing research, and the fact that it doesn't necessarily refer to something that might "actually" be intelligent is irrelevant. Eliza was AI too.

      Clearly, "AI" covers a spectrum that includes what one might also c
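      The chess example makes the point concretely: classical engines "play well" through brute game-tree search, with no understanding anywhere. A toy minimax sketch over a hand-built game tree (nested lists of leaf evaluation scores -- an illustrative assumption, not a real engine) looks like:

```python
def minimax(node, maximizing=True):
    # Leaves are static evaluation scores; internal nodes are lists of
    # child positions. The search just propagates max/min values upward,
    # alternating between the two players at each ply.
    if isinstance(node, int):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# The maximizing player picks the branch whose worst case is best:
best = minimax([[3, 5], [2, 9]])  # → 3
```

      Nothing in that loop resembles thought, yet scaled up (with pruning and a good evaluation function) this family of techniques beat Kasparov.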

  • I doubt ClosedAI is at all impressed by open-source efforts, including LLaMA. Claude, however, is a much bigger threat.

    Going half in on being commercial was a temporary solution at best. Sure, they can pay researchers a little more, but the charter prevents them from tying them down the way commercial companies can. Anthropic got billions for walking out the door with their know-how; they can't pay researchers enough to prevent that, and NDAs, trade secrets and patents are poorly compatible with their charter.

    • Re:

      They wouldn't need to if they dropped the commercial angle and simply opened up what they are working on. There are plenty of companies that would be happy to pay their researchers just to keep working on the open solution.
