
Ask Slashdot: What Happens After Every Programmer is Using AI? - Slashdot

 1 year ago
source link: https://developers.slashdot.org/story/23/07/23/0037247/ask-slashdot-what-happens-after-every-programmer-is-using-ai



There have been several articles on how programmers can adapt to writing code with AI. But presumably AI companies will then gather more data from how real-world programmers use their tools.

So long-time Slashdot reader ThePub2000 has a question: "Where are the generative leaps if the humans using it as an assistant don't make leaps forward in a public space?"

Let's posit a couple of things:
- First, your AI responses are good enough to use.
- Second, because they're good enough to use, you no longer need to post publicly about programming questions.

Where does AI go after it's "perfected itself"? Or must we live in a dystopian world where code is scrapable for free, regardless of license, but access to support from an AI trained on that code comes at a price?

Remember when the internet was young and we all thought that within a few years, maybe decades, we'd all have access to all the information in the world and it would be awesome? Never again would it be possible to bullshit people into believing lies, because they could now easily see just how they were being deceived. We thought we'd all become accomplished philosophers, because we'd engage in meaningful discussions and the marketplace of ideas would sort out all the bad ones: people would latch onto the ideas that meant progress and reject those they identified as superfluous.

Why do you think this is different, I have to ask? You think AI is any better at spotting people trying to fill it with false information, bully it, troll it and play havoc with its learning model?

  • Re:

    Recently a couple of lawyers were busted for submitting ChatGPT-written briefs that cited fictional cases to a court.
    When they were 'training' these AI models and feeding them all sorts of information, did anyone bother to tell the AI which were fictional stories and which weren't? IMO the cited fictional cases were probably from some TV show or movie.
    • Re:

      If people can't tell fictional stories apart from real ones, as we can observe quite often, why do you think AI would be able to?

      • But people can, particularly when educated and trained, be taught how to avoid disinformation. There are skills and tools to do so. AI can actually use those tools every time, unlike humans, who get tired of having to use them. Maybe there is hope.
        • You haven't been using the internet for too long, have you?

          Or did you stop using the internet back when it was mostly a tool for the academia and only just now started using it again?

          If anything, we have blatant proof at our hands that people not only cannot, but mostly don't want to, be trained to avoid disinformation. It's way more comfortable to simply confirm your existing assumptions and "be right", no matter what bullshit you believe.

          Unless you can train AI to not enjoy having its biases confirmed (because that's pretty much the problem with humans here: we actually enjoy learning something, but unfortunately we don't care whether what we learn is actually true), it will do the same. And that's the thing: you pretty much have to give AI the requirement to "enjoy" (whatever that may mean for an AI) learning something new if you want it to stay curious and keep learning.

          You'll end up with the same bullshit-believing artificial idiot, and you don't even replace the natural idiot; you just pile on.

          • Re:

            In my mind (and I will accept that I could be wrong), there is a difference between avoiding bias and accepting all information to understand the bias. You seem to be comparing humans, who are limited in their total information input, with computers, which are able to take in much more input. Humans have to try to avoid disinformation, as it is distracting and wasteful. Computers can take it all in and determine the closest approximation to the truth... at least in my mind.
            • Re:

              As with humans, it all depends on where you start. If it starts out with garbage, it will seek out garbage to confirm the garbage it already has, rejecting factual information as wrong because it contradicts what it already "knows" to be true.

              You can't even show an AI that it is factually wrong because it has no senses.

              It has no way to determine whether it is being fed bullshit. The best it can do is to take conflicting information and gauge that conflict against other information it has, try to weigh the qual

          • Re:

            I've been using the Internet for a long time and never stopped...

            But I think you're being ignorant with statements like "people not only cannot be...trained how to avoid disinformation" when studies show that people can be trained to avoid misinformation.

            I agree that many people don't want to do the work, but that is the exact difference with AI. It can, every single time, do the work that humans are unwilling to, which means it too can be trained to avoid disinformation and likely can do it better than

            • Re:

              I remember a report about an early attempt at AI (quite a few years ago, mind you) where the AI they trained was "genuinely" happy when introduced to the janitor of the facility because, by the AI's standards, he must be a very, very special and invariably very important and interesting person. The reason the AI drew that conclusion was that its knowledge about humans had been trained on celebrities and academics, and since the janitor was the first person that didn't match either group

      • Because if AI is programmed to be correct, then it is correct! (sarcasm)
        • Re:

          Actually, that shouldn't be sarcasm. ChatGPT was trained to write stuff *like* the stuff it encountered. The specific intention was that it not be the same stuff. (Of course, it's not perfect at that.)

          That the AI would always be correct is a very faint hope, but if it were designed to be correct, it would be correct a lot more often. It would (as currently designed) also be a lot worse at creative writing.

          • Re:

            You mean the "fuzzy logic" that is being used to make it feel less deterministic and more natural?

            I'm talking more about something that I've noticed sometimes being used in science fiction, where you have some society that has come to absolutely trust its computer by elevating it to some kind of god level of infallibility, with the justification that it was programmed to be infallible and thus everything it puts out must necessarily be correct. And if there's evidence that contradicts it, then that
            • Re:

              No. I mean it was intentionally designed to NOT say the stuff it had been trained on, but only to say things similar to it. This isn't "fuzzy logic", this is a design choice. It's an attempt to avoid copyright problems AND to seem more creative. But creative means less reliable when you don't have a solid base to test against.

      • Some things require accurate, precise, correct answers. The current crop of "AI" provide none of those.
    • Re:

      I think the main problem with it is that ChatGPT isn't and was never intended to be strictly truthful about things, and the lawyers goofed by assuming it was and not verifying the output. In theory, there's nothing keeping people from training an AI model on case histories and legal codes and the like and making sure it's not outputting fake or fabricated info.

      I think we're still a ways away from being able to trust AI with whole ass briefs, with or without human verification, since if it's basing the brief off

      • Re:

        Any time an AI synthesizes an answer, even if its source material is completely factual, there's a chance it will be wrong. Remember, it is creating its output based on complex textual analysis, not actual reality.
      • Re:

        I'm sorry, but no.

        You're talking about ChatGPT, and imagining that it has "facts" in it, and theorizing that if it was "trained" only on correct facts, that it would output correct information. That is not how ChatGPT works - that's not what it does, at all. It contains NO FACTS, and there is no way to put facts into it. It will always and forever output "hallucinated" wrong information, in blatant and subtle ways. There is just no way to fix that, because that is the essence of what ChatGPT does.

        There are

    • It didn't give accurate summaries of fictional lawsuits; it fabricated everything. Here's an example: it created a citation for "Shaboon v. Egypt Air" complete with case number and selected quotations. There's no such lawsuit, either in reality or in a TV show or movie. If there were, that's all anyone would be talking about: that it can't tell the difference between TV and reality. But that's not what happened. It "hallucinated", as the ML folks call it.

      You've got an inaccurate view of what this software is. ChatGPT is a Transformer. BASICALLY, it's a really big neural network with a few thousand inputs. Each input is a "token" (an integer representing a word or part of a word), including a null token. The output is a probability distribution for the next token. Because the input is null-padded, you can pick a likely next word and write it into the next null slot, then repeat. Since only part of the input changed, the process can be chained efficiently, generating until a special "End of Text" token appears or until all nulls have been replaced with tokens. (A rough sketch of this loop appears at the end of this comment.)

      That's the basics. Under the hood are a lot of moving parts, but an important component is a subnetwork that's repeated several times, called an "Attention Head". These subnetworks are responsible for deciding which tokens are "important" (this is called "self-attention", as the model is calling its own "attention" to certain words). This mechanism is how it can get meaningful training with so many inputs: you might give it 1200 words, but it picks out the important ones and predicts based largely on them. This is also how it can make long-distance references to its own generated text; proper nouns tend to keep attention on themselves. Earlier techniques couldn't do that: the further away a word was, the less it mattered to the next one.

      So, it doesn't know about cases at all. It just knows, e.g., if you ask about SCO v. IBM, that those tokens are ALL important, and then it (hopefully) has been trained on enough descriptions of that case that the probability distribution shakes out to a coherent summary. Now if you ask for relevant case law and it hasn't seen any, it HOPEFULLY will say so. But it's been trained on far more cases that exist than on "don't know" refusals, so it can "hallucinate" (note that it now HAS been trained on a lot more refusals, which is annoying because it's now very prone to say things don't exist when they do).

      It knows the general form is "X v Y", so, absent any training indicating that a SPECIFIC value for X and Y would be relevant, you'll just get a baseline distribution where it invents "Shaboon v. Egypt Air": it knows X should be a last name, and since it was asked about injuries during air travel, that the defendant would be an airline (and presumably it picked Egypt Air because generation is left-to-right and it had already generated an Arabic surname).

      Now here is where self-attention gets really dangerous. Just like it would recognize SCO v. IBM as important in a user query, it will recognize Shaboon v. Egypt Air as important. The case doesn't exist, so pretraining will not do much with it per se, but the model is going to focus on those tokens and, if asked for excerpts, will generate SOMETHING related to a passenger being injured during air travel. Or it will say it doesn't know. It almost always says it doesn't know or that no such case exists; in large part that's because, after the bad press, ClosedAI has been very busy fine-tuning it on "I don't know" responses.

      Here's an example of it dealing with fictional cases. I asked it what the case was called in the Boston Legal episode "Guantanamo by the Bay". It said there is no such episode and I likely am thinking of fan fiction. I told it it's real, it's S3E22. It said of course, yes, it's the twenty-second episode of the third season, and is about Alan Shore arguing Denny Crane is not fit to stand trial due to dementia, but there are no case names mentioned. I told it that's wrong (but I didn't elabo
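
      To make the two pieces above concrete, here's a minimal toy sketch in numpy: one causally-masked self-attention head, and the autoregressive loop that keeps sampling "the next token" until an end-of-text token shows up. Everything in it (the tiny vocabulary, the random weights, the function names) is made up purely for illustration; it shows the shape of the process, not anyone's actual implementation.

```python
# A toy sketch of the two mechanisms described above, with a made-up
# vocabulary and random (untrained) weights. It is NOT how any real model
# is implemented; it only illustrates the shape of the process.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["<eot>", "Shaboon", "v.", "Egypt", "Air", "the", "plaintiff", "airline"]
EOT = 0            # index of the special "End of Text" token
D = 16             # embedding dimension (tiny, just for the demo)

# "Learned" parameters; a real model has billions of these, trained on text.
embed = rng.normal(size=(len(VOCAB), D))
W_q, W_k, W_v = [rng.normal(size=(D, D)) for _ in range(3)]
W_out = rng.normal(size=(D, len(VOCAB)))

def self_attention(x):
    """One attention head: each position decides which earlier tokens are
    'important' and mixes their value vectors accordingly."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(D)                 # how strongly each token attends to each other token
    scores += np.triu(np.full(scores.shape, -np.inf), k=1)  # causal mask: no peeking at future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)           # softmax per position
    return weights @ v

def next_token_distribution(tokens):
    """Turn the context so far into a probability distribution over the vocabulary."""
    h = self_attention(embed[tokens])[-1]         # representation of the last position
    logits = h @ W_out
    p = np.exp(logits - logits.max())
    return p / p.sum()

def generate(prompt, max_len=10):
    """Autoregressive loop: sample the next token, append it, repeat until <eot>."""
    tokens = list(prompt)
    while len(tokens) < max_len:
        p = next_token_distribution(tokens)
        tok = int(rng.choice(len(VOCAB), p=p))
        if tok == EOT:                            # the model chose to stop
            break
        tokens.append(tok)
    return tokens

out = generate([VOCAB.index("Shaboon"), VOCAB.index("v.")])
print(" ".join(VOCAB[t] for t in out))
```

      With untrained, random weights the output is gibberish drawn from the toy vocabulary, but that is the point of the exercise: nothing in the loop checks facts. It only ever emits whichever token the distribution makes likely next, which is exactly how a plausible-looking "Shaboon v. Egypt Air" citation can come out of thin air.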

      • Re:

        I don't like "hallucination", as it's a stupid term, and I refuse to use it. The correct term is "malfunction". If you replace "hallucination" with "malfunction", the stories make a lot more sense. If you replace "hallucinated" with "malfunctioned", the stories also make a lot more sense.

        The AI people are a bunch of snake oil salesmen, and they do not deserve the respect of creating terms for the rest of us to adopt. Machine learning is interesting and useful, but AI is neither.

        • Re:

          "malfunction" implies that something went wrong. You might similarly consider it a malfunction when a car engine produces pollutants from its tailpipe, but most people would disagree -- in both cases, it's behaving as designed, and it's just that the design isn't really what we want it to be.

        • Re:

          "The uh.. artificial person MALFUNCTIONED, and a few death were involved."
          Weyland-Yutani coporate sleaze-speak at your service.

  • Re:

    I never heard of this claim, but I will accept that some people believed it at the time. You are coming to a conclusion, which may or may not be accurate, from the original statement. We have POTENTIAL ACCESS to all sorts of information. If people are drawn to misinformation, then that is, by definition, a LACK of information. I agree that you need some sort of filter to determine the difference, but that is the purpose of this thought experiment. What happens when the AI is smart enough? I suspec

    • Well, my time with the internet dates back to when it was mostly an academic's toybox. Back in the days before AOL and before the Eternal September. The average internet user had a considerably above average IQ.

      AOL should have been a warning. Our main failure is that we ignored it. We let the masses in. We have nobody to blame but ourselves.

      • I was on the Internet around the same time as you, and I must tell you that most (not all of them, but surprisingly the majority) of those people with above-average IQs were the most trollish and pompous asshats I ever met in all my life. So much for the Internet utopia.
        • Re:

          Yes, no, at least not really. We were pompous assholes, sure, but we at least contributed to the general progress.

          This here is just a cesspool circling the drain.

          I was on the Internet around the same time as you, and I must tell you that most (not all of them, but surprisingly the majority) of those people with above-average IQs were the most trollish and pompous asshats I ever met in all my life. So much for the Internet utopia.

          I've been on the net since the 1970s (first it was the ARPANET) and we discussed the future when, someday, regular people would have access to what we referred to as the "World Net". We kind of did think it would be more like an information utopia.

          That fantasy fully ended when AOL came along.

          I was also an AI researcher in the early 1980s, and automated programming was a hot topic. The approach was based on the system actually understanding how the program worked and trying to model what the programmer was thinking. Let's just say that didn't turn out like we thought, either.

      • Re:

        I blame ourselves and thank ourselves every day. I'm beginning to wonder if you're in fact an AI yourself, "hallucinating" a rosy past with such a narrow point of view that you think it was better.

        The internet was... more accurate back then, but not even remotely as useful. Letting the masses in undoubtedly changed the entire world for the better, regardless of what you think when you forget to take your meds.

        • Re:

          What's better about the world today because of the Internet? I would say free Information but students are still dropping an easy G on textbooks. Perhaps social media has had positive effects on the mental health of the masses... oops... apparently just the opposite.

          I know being online is better than mindlessly watching the idiot box! It will promote thinking and raise the standard for discou... oh never mind.

          Free international calling? I guess that's mostly true.

          Gaming! Because playing with AIM bottin

          • Re:

            This is the same sort of pearl clutching commentary that was made back in the 1950s when television was rapidly replacing radio in the home. It holds no more relevance now than it did back then.

            The Internet is a tool. As with any tool, how it's used is dependent entirely on the user. Yes, it does get used for things that are of no benefit to society and, in some cases, to the detriment of society. But it also gets used for the good of society as well. That you are unable to see the good side of it does not

      • Re:

        You are projecting human attributes onto ChatGPT that aren't there. ChatGPT is not self-aware enough to be a narcissist.

  • Re:

    Who thought that? I was online via telnet in 1991 and I don't remember anybody drawing your conclusions about the future. I don't recall anybody describing a factual utopia of unassailable truth. Ever.

    I do, however, remember lots of spooked people who didn't much care for the direction this would lead.

  • I'd expect large chunks of unmaintainable code, especially if the AI that created the code goes out of business or no maintenance is needed for several AI generations.

  • I'm not sure why programmers would be surprised at systems degrading over time, that's kind of their domain! Anything that goes mainstream is going to suffer from higher entropy, including actively negative usage that impacts other users.
    • Re:

      Then again, back then photographic proof did actually mean something. Today I can prove that Donald Trump gives blowjobs to Putin with deepfakes that can't even be debunked anymore, so a picture has become totally worthless.

      What is and what is not true has become pretty much meaningless anyway. Everyone just believes what they want and there will be no shortage whatsoever of pictures, texts and videos to prove whatever anyone wants to believe. And even conclusive proof of the opposite is not going to sway p

