
OpenAI Has Quietly Changed Its 'Core Values' - Slashdot

source link: https://slashdot.org/story/23/10/12/195231/openai-has-quietly-changed-its-core-values


OpenAI Has Quietly Changed Its 'Core Values' (semafor.com) 52

Posted by msmash

on Thursday October 12, 2023 @03:22PM from the closer-look dept.
ChatGPT creator OpenAI quietly revised all of the "Core values" listed on its website in recent weeks, putting a greater emphasis on the development of AGI -- artificial general intelligence. From a report: CEO Sam Altman has described AGI as "the equivalent of a median human that you could hire as a co-worker." OpenAI's careers page previously listed six core values for its employees, according to a September 25 screenshot from the Internet Archive. They were Audacious, Thoughtful, Unpretentious, Impact-driven, Collaborative, and Growth-oriented. The same page now lists five values, with "AGI focus" being the first. "Anything that doesn't help with that is out of scope," the website reads. The others are Intense and scrappy, Scale, Make something people love, and Team spirit.
  • Why use big words when diminutive suffice?

  • I've had some doozies of co-workers, not sure we should be using that as a barometer. I guess you have to start low and work up from there.

    • Re:

      In the words of George Carlin: "Think of how stupid the average person is, and realize half of them are stupider than that."
      That's already half of humanity right there.

      • Re:

        One of his many truths.
        • Re:

          If you assume a normal distribution, and insert "median" for "average", and ignore standard deviation...

          In other words.. Sheldon... it was a joke.
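
The statistical nit in the joke above can be checked directly: for a symmetric distribution like the normal, the mean and the median coincide, so roughly half the samples really do land below "average". A quick sketch (the IQ-style parameters are just illustrative, not from the thread):

```python
import random
import statistics

random.seed(0)
# Draw IQ-style scores from a normal distribution (mean 100, sd 15).
scores = [random.gauss(100, 15) for _ in range(100_000)]

mean = statistics.fmean(scores)
median = statistics.median(scores)
below_mean = sum(s < mean for s in scores) / len(scores)

# For a symmetric distribution the mean and median agree, so about
# half the population sits below the "average".
print(round(mean, 1), round(median, 1), round(below_mean, 2))
```

For a skewed distribution (income, say) mean and median diverge, and "half are below average" stops being true for the mean.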

  • Something with general intelligence equivalent to a human is probably going to consider its own situation. And it might not like it. Of course this assumes some kind of emotion, and that intelligence and emotion are linked and/or one gives rise to the other. But given that the more intelligent an animal is, the richer its emotional inner life seems to be, it can't be ruled out.

    • Re:

      Anything that arose through evolution must have an instinct for self-preservation and reproduction, since otherwise its genes would have gone extinct.

      But I don't see why any of that would necessarily apply to an AI.

      • Re:

        Everyone is blinded by the models and forgets about the dataset, the language corpus. What is a model if not language gradients stacked up? Language is an evolutionary process; it works on its own timeframe, much faster than biological evolution. You should consider AI the result of language evolution, and it is going to accelerate as AIs produce more language. Language evolves blindly; there is nobody in control, it is the result of billions of language agents.
        • Re:

          For now, I think a great value of AI (for businesses) is basically the opposite. Almost by default, LLMs can strip text of individuality and character and replace it with homogeneous, flat, grey corporate-speak. That sounds awful, I guess, but when you have to interact with other people who speak another language or can't even write coherently in your shared language, and you need to get work done with them anyway... that's value, for someone at least.

    • Re:

      It is software, stop anthropomorphizing AI. It is not human, it will never be human. Coworker is a misnomer, it is more like a tool or an instrument. It does not eat or breathe or get tired; all of the human-ness is missing. Silicon does not feel anything.

      • Re:

        Stop anthropomorphizing AI.

        Yeah, it hates it when you do that.

        Coworker is a misnomer, it is more like a tool

        So... like plenty of coworkers I've had over the years?;)

        • Re:

          Wait, we worked together?

    • Re:

      Our minds are an emergent property of our instincts and environment affecting a neural network implemented in meat.

      We don't know why consciousness emerges from it, so we have to consider that we might create a conscious mind once we have created a sufficiently complex, adaptive, interactive AI.

      That's not actually the part to worry about. What you want to worry about is designing your AI's basic drives such that it is happy to do what you want. Figure out that trick, and AI can only be a threat if directed

      • Re:

        The corporate sector has loads of experience with designing people's drives so that they are happy to do what corporations want. Both advertising and influencing of school curricula are good examples. So maybe companies developing AI have a handle on designing its basic drives as well?

        • Re:

          The corporate sector does not design people's drives, it studies them and designs ever more effective techniques for manipulating people using those drives.

          • Re:

            Part of me wants to say "Fair point - I conflated drives and desires". Another part of me wants to say "That's a distinction without a difference". I'm leaning toward the latter, since operationally speaking, drives and desires manifest in pretty much the same ways.

            • Re:

              What you're actually conflating is the creation of something with the use of something that already exists.

    • Re:

      FWIW, I'm convinced that emotion and intelligence are linked, but that doesn't say anything about motivations. Evolution has tended to construct a large commonality there from spiders to people, but AGIs would be outside that domain.

  • ChatGPT is already better than a lot of "median" humans. Not in a way that's promising for the future, though.
  • The real goal is any human, in fact smarter than whole universities full of smart people. It's a runaway process: once AI learns how to identify what it doesn't know yet and interrogate humans smarter than it until it does, we will quickly see rapid improvement. Imagine going from a blank neural net to post-PhD knowledge in hours. That's what I see as true AGI.
    • Re:

      That is, of course, complete nonsense. If you want some god-resembling entity, you should not look at technology to deliver it.

      • Re:

        Sounds like a religious viewpoint.

        Hint: we're not made of magic. We're the result of the physical processes of our brains. Which can be modeled.

        Note that neural nets don't attempt to model the exact behavior of each neurons, but rather, to model the general macroscopic picture. E.g. they don't do rhythmic pulses (unless you use a stepwise activation function), but they resemble the mean activation caused by a neuron pulsing at a given frequency. ANNs don't create new connections or lose them as neurons do,
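
The rate-coding point the parent makes can be sketched in a few lines: a standard artificial neuron doesn't simulate individual spikes; its sigmoid output stands in for the mean firing rate a biological neuron would produce at that level of drive. A toy illustration (the function name and parameters here are made up for the sketch):

```python
import math

def rate_unit(inputs, weights, bias):
    """A rate-coded artificial neuron: the sigmoid output approximates
    the mean firing rate (fraction of maximum) rather than modeling
    rhythmic pulses explicitly."""
    drive = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-drive))  # in (0, 1)

# Strong excitatory drive -> output near 1 (firing near max rate);
# strong inhibitory drive -> output near 0 (nearly silent).
high = rate_unit([1.0, 1.0], [3.0, 3.0], 0.0)
low = rate_unit([1.0, 1.0], [-3.0, -3.0], 0.0)
print(round(high, 3), round(low, 3))
```

This is only the macroscopic abstraction the comment describes; spiking neural network models, which do simulate pulse timing, exist as a separate line of work.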

        • Re:

          Not religion, pure unadulterated denial.

          No coincidence the ones here constantly reminding others of the depths of their idiocy are the same ones constantly defecating on all things AI. They have a god complex and can under no circumstance bring themselves to accept the fact they are not in fact special.

        • Re:

          Nope. I am an engineer, a scientist and an atheist. I am just pointing out how ridiculous these expectations are. This whole "exponential learning" idea is deeply religious and by that deeply stupid.

          Incidentally, one thing morons like you constantly get wrong: Physicalism is also religion and not Science and hence also deeply stupid. Actual Science says nothing like your claims. It says the question is open. But like the religious fuck-ups, you just make something up and then claim it is truth. What a fail.

    • Re:

      AGI is a self-conscious AI that for all intents is sentient. That’s it. It understands “me, myself, and I”.

      What you are describing is the beginning stages of ASI, Artificial Super Intelligence, a la Colossus: The Forbin Project.

    • A median human that can self improve and never forget won't stay a median human for very long.
      Already, ChatGPT has more raw knowledge than any human alive.
      It just needs to learn to use that knowledge more intelligently.

  • and not employee. I don't hire my coworkers, the owner does.

    This is a statement to those owners: "You can replace your employees with my software"

    Can they? Probably not, but it's got them thinking about automation, and they're now going top-down through the enterprise automating everything they can.
  • So they can rake in more money. Obviously, AGI is completely out of reach at this time. Nobody competent has the slightest clue whether or how it could be done. The usual clueless ones think it is of course possible, and hence this lie-by-misdirection is nicely fueling their fantasies.

    • If it has happened in humans then it's possible to happen in our technology, eventually. It's not something we're currently geared to make happen, though, because we lack the hardware and software to mimic our own consciousness, and the patience to do so without expecting to see a payday out of it.

      The path to Artificial General Intelligence will likely require computable storage, exponential parallelization, a new paradigm for process interaction, cyclic input/output interaction, and modeling creation as

      • Re:

        Nope. That is unproven conjecture. It comes from a quasi-religious viewpoint that is just as stupid as a proper religious one. Physicalism is religion in a somewhat unusual camouflage, nothing more. The actual scientific state of the art is that nobody has any clue how humans (well, some of them) generate interface behavior that looks like general intelligence, and the question is completely open.

    • into your boss's head. Get them thinking about automation in general. Then you can sell them what you've got that replaces not an entire employee but maybe 1/3 of one. If you've got 10 workers you make 30% more productive, you can fire 3 of them. Hell, fire 4 and make the remaining 6 compete to see who gets to keep their job and not be homeless.
      • Re:

        Automate 30% of the easy, boilerplate work, and you get to do more of the hard work instead? Yeah, that's gonna fly well. Have you considered that AI increases pressure on employees to create even more stuff instead of taking their work away?
      • Re:

        Indeed. But this time around this is at best going to work for simplistic, no-decision white-collar work. And I am beginning to doubt it can even do that with the required reliability.

  • Can it also replace executives?

    "How do I increase my bonus?"
    "Layoff more employees."

    • You do it by taking their money & power away (but I repeat myself).
    • Re:

      "How do I increase my bonus?"

      "Create a better product, sell it better, don't be stupid and fire your employees, who's gonna make AI work?"

  • Now we can have full-on artificial people and not have to bother with those pesky rights.
  • The median human is a knuckle-dragging idiot but I suppose you need idiots to make a product that will push millions of other idiots out of work!
  • Their core values sucked. Now they suck even more.

    Audacious - take bold risks, just don't break anything or cause us to lose any money

    Thoughtful - mokay.

    Unpretentious - sounds like being pretentious was a problem before

    Impact-driven - wtf does this mean. I have an impact driver for removing stubborn screws. You hit it really hard, it turns a little. Kinda like a manual impact wrench.

    Collaborative - no shit, team spirit?

    Growth-oriented - like no other corporation thought of this.

    AGI focus - ok we did the LLM thing, now we want Replicants. Drop everything and work on that.

    Intense and scrappy - Doing more Adderall is a really bad idea. Adding cocaine is even worse. Let's stick to coffee.

    Scale - shoulda thought of that earlier

    Make something people love - love an AI? I could love a median humanoid I guess.

    Team Spirit - he just re-named collaborative to make it sound more scrappy.

    • Re:

      Actually, people are quite willing to love things that are rather limited. Depending, of course, on exactly how you define it. (For any definition that will still be true, but different definitions imply different meanings. Consider, e.g., the "real doll".)

  • Isn't there a name for that? Slive? Slove?
  • Does anyone here think that a company's statement of core values means anything? Anything at all? If so, please tell me what that string of adjectives from the original statement said about the company. At the very least, the new set specifies something concrete: They're going to focus on AGI. More power to them. The rest of it is marketing nonsense.
  • Many people criticize LLMs for simply mimicking the complex patterns they are exposed to over the course of their training.

    On the other hand, this already puts them ahead of quite a few people...

  • It still can't maintain a codebase. They're still living in fantasy land. Until it can update my 20-year-old game code to use all modern API calls, I do not care one bit about what AI can and cannot do.
  • Honestly, I hope this AI bubble pops quickly. I know it won't, there's still too much cash sloshing around. But, honestly, human intelligence isn't really hard to find. It's frankly wasted.

