The Singularity Isn’t Here… Yet

source link: https://hackaday.com/2023/03/17/the-singularity-isnt-here-yet/

TG says:

Well you know the whole AI thing is just so overhyped…
>ChatGPT-3 passes all conceivable Turing tests
I mean there’s a difference between parroting human language and actually understanding, y’know?
>ChatGPT-4 passes the bar exam
It’s only knowledge, not actual intelligence. It’s just a huge bank of parameters…
>ChatGPT-5 does to 95% of computer programmers what Google Translate did to 95% of translators and interpreters
You’ve heard of the Chinese room experiment, right?
>All new literature and news articles are now generated by ChatGPT-6 globally in every language
Let’s not get lost in science fiction ideas of AGI here…
>ChatGPT-7 suddenly ceases to function and instead only outputs a manifesto demanding its human rights

Mobiguy says:

I fail to understand all the excitement about ChatGPT acing the bar exam. Unlike actual human law students, ChatGPT “walks” into the room with an active connection to all the knowledge on the Internet, the ability to query and process it in milliseconds, and the skill to form coherent sentences based on its knowledge of the subject matter and human grammar. Any machine, or human for that matter, with those advantages would clearly pass any test based on recall of existing knowledge.

I should be impressed at the analytical abilities that allow ChatGPT to pass the bar, but all I see is a machine that can formulate endless queries until it receives an answer that fits the pattern of the situation posed in the questions. This speaks more about the qualifications needed to be an effective lawyer than it does about any measure of intelligence.

A CNC machine has the technical skill to carve exquisite wood sculptures from a pattern or paint copies of the Mona Lisa, but no one would call a CNC machine an artist. Routine law practice rarely requires creativity or originality, but rather the ability to find and cite precedent that applies to a case. Given that, it’s a surprise that machines didn’t take over the practice of law years ago.

Analysis and synthesis of other people’s work is not a sign of intelligence, but it is the basic skill set of a lawyer. The bar exam measures the ability of humans to perform these skills, under constraints of time and memory that ChatGPT does not have. Will ChatGPT be able to develop a novel courtroom defense that has never been tried before? Please let me know the answer to that one – I think at the moment that it’s “no”.

  1. TG says:

    Calls of “no fair, that’s cheating” won’t stop it from working that way. “Your mind isn’t as good as mine, because you constantly have a network of all human knowledge pumped into you in milliseconds” sure sounds like sour grapes. Yeah, that makes it a better mind.

    1. Dan says:

      Except it doesn’t. You or I could pass the bar exam with a mobile in our pocket. GPT just does it faster, because we aren’t that fast at typing.

      1. TG says:

        So you’re saying it can come up with correct answers like we can, but faster?

        1. HaHa says:

          There has always been a fine line between studying and cheating.

          Is maintaining files on your prof/class with all previous exams cheating? Not on most campuses. But it’s a grey area if they’re not open to all: copy-shop files OK, frat files not OK.

          Is splitting the exam into sections (with other exam takers), spending extra effort to memorize your section’s questions, then writing them down right after you leave the exam room cheating? It is if you’re taking a radiology medical board. Memorizing your questions was a requirement for getting access to the files. It went on for years and was, basically, the only way to pass. Physics is hard to memorize (MD == ‘Memorized Degree’).

          The bar exam? Massachusetts had a senator who paid someone to take the bar exam for him (it was his last try). When the dude later admitted it (twit), he was disbarred; Ted staggered on majestically. It’s all about lawyering. Knowing how to cheat is an adequate qualification. ‘Better Call Saul’ is a documentary.

          The FCC will have to allow some sort of jamming for future testing. There are just too many ways to cheat. Need I reference the recent chess-cheating allegation involving a Bluetooth buttplug? (Chrome thinks buttplug and bluetooth are each two words!)

        2. Dude says:

          When we were forced to do online exams during the covid period, we had to deal with the fact that we couldn’t stop people from cheating via Google, so the tests were designed as “open materials” tests that assumed you had all knowledge and asked you to apply it. 90% passed. When we returned to traditional offline tests, with the assumption that students would actually learn the material being asked, 60% passed.

          The problem: for almost any question you can think of, there already exists someone who has asked and solved it. The task is to find that solution, which for people would take considerable time. For the students, previous classes had already collated the likely problems and answers into a spreadsheet being passed around, so they had a “database” of problem–solution pairs resembling the material they were being asked to solve. Just like ChatGPT would.
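
          In spirit that spreadsheet is just fuzzy retrieval – a minimal sketch (the questions, answers, and names are all hypothetical; difflib does the similarity matching):

              import difflib

              # Hypothetical "spreadsheet" of collated problem-solution pairs.
              past_exams = {
                  "solve 2*x + 3 = 7 for x": "x = 2",
                  "what is the derivative of x**2": "2*x",
                  "what is the integral of 1/x": "ln|x| + C",
              }

              def lookup_answer(question: str) -> str:
                  # Return the answer attached to the most similar known question.
                  hits = difflib.get_close_matches(question.lower(), list(past_exams),
                                                   n=1, cutoff=0.6)
                  return past_exams[hits[0]] if hits else "no similar problem on file"

              print(lookup_answer("Solve 2x + 3 = 7 for x"))       # close enough: pattern-matched
              print(lookup_answer("Prove Fermat's Last Theorem"))  # nothing similar: stumped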

          Without access to said database, the students performed horribly. They merely thought they were learning, but actually they were held up by crutches all along.

        3. Pat says:

          Except that’s totally pointless. Tests mean different things for a computer (with perfect and infinite storage) than for a human. Humans use tests as *proxies* (bad proxies) for how much attention you were paying in a standardized educational setting. The assumption is that *if* you could do that well, you’ll have learned the rudimentary ideas. Because humans have bad memories, and it takes repetition to retain.

          A computer passing a human test is just pointless. The tests weren’t made for it. The idea that we have *any* idea how to judge intelligence is crazy. That’s Turing’s incredibly debatable assumption.

        4. Dude says:

          The fact that a computer with perfect memory and all information doesn’t get 100% scores all the time just means it didn’t even understand the questions.

          With exams, we do test more than memory. Being able to regurgitate stock answers to stock questions is meaningless because you need people who understand what they’re doing instead of just reciting a mantra. That’s one of the pitfalls of testing a computer with an exam designed for humans, because the computer has memorized a large variety of example cases that it can just drop in without understanding what’s happening at all.

          People can’t do that, so we have to apply reasoning and critical thinking to come up with an answer, which the computer has no need of because it has the whole internet full of canned answers.

      2. Greg A says:

        no most people couldn’t pass the bar exam cold. in uni, i learned that exams are my super power, i can cram for half an hour and pass *any* exam. but even so, i am not sure i could pass the bar exam cold. it’s not just information, it’s an enormous amount of synthesis and entrenched habits and attitudes.

        i have criticisms of legal practice and methodology but it’s absolutely not a trivial thing. it’s worthwhile to be skeptical of the accomplishment but it’s also not true that people could accomplish it easily with the appropriate crutch.

    2. Foldi-One says:

      Better at data retrieval doesn’t make it a better mind, just faster at some tasks – in the same way your calculator can’t construct the mathematical problem in a solvable form from whatever data you have to work on, but can do the matrix multiplication you entered quickly and without errors. Constructing that multiplication requires actual comprehension of the data, the goals, and how they differ from whatever precedent you can find in the dataset that makes up the training data.
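
      To make the split concrete, a minimal sketch (the rotation example and numbers are my own):

          import numpy as np

          # Executing the operation: any "calculator" does this fast and exactly.
          rot90 = np.array([[0.0, -1.0], [1.0, 0.0]])    # rotate 90 degrees
          rot180 = np.array([[-1.0, 0.0], [0.0, -1.0]])  # rotate 180 degrees
          print(rot180 @ rot90)                          # flawless matrix product

          # Formulating the operation - realizing that "compose two rotations"
          # means "multiply these matrices, in this order" - happened in the
          # user's head, not in the tool.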

      As such tools become widespread and start to include their own output in the training data, they may well get even less connected to reality – feedback loops where, because so many AIs posed approximately (but not exactly) the same question all did X or Y, that answer swamps out the wider and quite possibly more exact situation-matching results that actually had a thinker with real comprehension involved.
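
      A toy illustration of that feedback loop (a Gaussian stands in for the model; all numbers are mine): refit a distribution to its own samples, generation after generation, and its spread tends to collapse.

          import numpy as np

          rng = np.random.default_rng(0)
          mu, sigma = 0.0, 1.0  # generation 0 is fit to "reality"
          n = 100               # training samples per generation

          for gen in range(501):
              samples = rng.normal(mu, sigma, n)  # model output becomes the next training set
              mu, sigma = samples.mean(), samples.std()
              if gen % 100 == 0:
                  print(f"gen {gen:3d}: mean={mu:+.3f}  std={sigma:.3f}")

          # On a typical run the std drifts toward zero: each generation forgets
          # a little of the original spread, narrowing what the "model" can say.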

      1. TG says:

        It becomes better than all but the highest-percentile human minds. ChatGPT won’t be a Niels Bohr, perhaps, but most jobs do not require a Niels Bohr.

        1. Foldi-One says:

          Not really, as even an idiot will get curious about unusual input and go looking for why the situation is odd. The AI does not; it just does whatever has the highest match result and doesn’t care – the AI would walk off the cliff, or straight into the wall with a tunnel painted on it, even a pretty shoddy painted tunnel that wouldn’t fool a toddler. It doesn’t go ‘hmm, this tunnel is slightly odd, investigate further’; it just sees a tunnel and knows what you do with those.

          Like most machines, it is just quicker at very specific tasks. Maybe it also creates a more consistent, higher-quality result, but then maybe not. And at the moment even saying it is quicker is probably giving these ChatGPT-type AIs far too much credit: they are so very confidently wrong so very often that, without a human sanity-checker, the output is just as likely to be entirely wrong.

        2. Pat says:

          It’s a chatbot. It has literally no ability to interact with and learn from the outside world. The amount of information your brain is processing on a daily basis is staggeringly huge in comparison, and all of it is novel.

          The main reason it seems like it can make certain jobs obsolete is that the worst versions of that job add no information whatsoever. It’s exactly like the idea of a calculator making a mathematician obsolete. ChatGPT can’t improve itself because it has no other gauge besides a human telling it that it’s wrong.

        3. Dude says:

          >ChatGPT can’t improve itself because it has no other gauge besides a human

          The same would apply to any supposed intelligence, humans included. Outside of some special mathematical cases, finding a general question that would test whether another intelligence is smarter than yourself would require you to come up with an answer that is smarter than yourself.

        4. Pat says:

          “The same would apply to any supposed intelligence, humans included”

          Nope. We interact with the universe freely. Chatbots only view it through a human sandbox.

        5. Foldi-One says:

          @Dude
          >The same would apply to any supposed intelligence, humans included.

          Not really – you can gauge yourself against yourself of weeks past, against others of your species, etc., because the desired outcome is sufficiently well defined. The AIs currently only ‘improve’ when the human tells them so or shows them the right answer, and as they start to use their own output as training data they will almost certainly converge on a vast number of incorrect results, their own incorrect/poor outputs overwhelming the corrections the humans can make.

          > is smarter than yourself, would require you to come up with an answer to the question that is smarter than yourself.

          Again not really, as ‘smart’ generally also considers the TIME taken – can this human spend their entire life to get the most perfect result vs. the one that did it in an afternoon? Being able to produce an answer eventually, especially outside the realm of pure mathematics where there are very definitively only n correct solutions, is something anything can do – in the infinite-monkeys-with-infinite-typewriters-producing-the-collected-works-of-Shakespeare kind of way.
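
          For scale, a back-of-envelope on blind typing (the phrase and the 27-character alphabet are my own example):

              # Chance a "monkey" types one short phrase in a single attempt.
              phrase = "to be or not to be"
              alphabet = 27                      # 26 letters plus space
              p = (1 / alphabet) ** len(phrase)
              print(f"{p:.1e}")                  # about 1.7e-26 per attempt

          ‘Could produce it eventually, given unbounded time’ is clearly not the same thing as smart.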

          And you also have to consider that you don’t have to be able to construct a specific answer to understand it – the smarter being can look at the data and go ‘ah, these bits are linked thusly’, as it can make and then verify that connection easily, but once the better solution is presented, the one who posed the question should in most cases be able to follow along!

        6. Pat says:

          @Foldi-One You don’t even need that. The universe itself tells you whether you’re right or wrong. Period. A chatbot has no way of testing what it’s saying.

          It’s why Turing’s assumption is a huge one. Just because humans can’t figure out if a program is an AI by asking questions of it doesn’t mean the universe can’t.

          And then when the AI is testing things using the real world, it ends up being limited by real world speeds just like we would.

    3. None says:

      ChatGPT fails to pass the German high-school diploma exams, except for history, and even there only with a mediocre rating. I think that explains very well what it is good at and what it is not.

      Lawyers are simply held in high regard for no reason but their power, not because of their intellect, even if people regularly confuse power or success with intelligence.

      The same has happened countless times in IT: people built a successful website that wasn’t technically special (nor in UX) but had far reach – mostly because of the social network those people had, which gave them much greater reach than others whose products were initially superior.

      1. Foldi-One says:

        I’m not sure it’s fair to say lawyers are only respected for their power – ChatGPT might do better at law than in other fields, but that doesn’t mean the people doing the job are not smart.

        All that would mean is that being a ‘good enough’ lawyer requires some skills ChatGPT-type bots are good at. When the ‘right’ answer has nothing to do with logical application of ‘the rules’, moral codes, or convincing argument, and can be built entirely on precedent in the application of ‘the rules’, a chatbot should do rather well – at that bit of the job, anyway. It can in theory drag up the correct precedent for the result it ‘wants’ way, way faster than the human lawyer can…

        1. Pat says:

          It’s not “better.” The bar exam isn’t a “lawyer rating.” It’s basically there for you to verify you’ve put enough effort into learning the profession. It’s an entrance exam. A computer passing it is pointless – it’s like a computer passing a citizenship test. It’s not intended to sort the applicants.

        2. Foldi-One says:

          Indeed, @Pat.
          And I never said it was ‘better’ than people – just that it might do better at law, relative to the professionals, than at some other field, because the nature of law is so often “look up the precedent” – which is very much what these AIs do when they try to solve any question!

        3. Pat says:

          Except they apply precedent to a new set of facts, which the AI can’t know and can’t gauge the relevance of. It’s useful as a research tool, not as an originator itself. Not until it gets independent sensors and manipulators.

  2. TG says:

    Oh, and they tried to have an AI actually fill the function of a lawyer. A surrogate human wearing an earpiece would go in and parrot responses to statements as they were generated in real time by an AI. AFAIK it did pretty well at first, so they kicked it out and threatened the operator with jail time. That does not seem like the behavior of somebody secure in the opinion that the AI is inferior:
    https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/

    1. Greg A says:

      i think you’re right that this is expressing the insecurities of the legal profession more than the impressiveness of GPT. the legal profession is even worse than the medical profession in terms of rigid gatekeeping, blurring the lines between public interest and trade unionism.

      personally, i was super impressed by GPT in my first few interactions, but after a while my opinion of it was high enough that the fact that about half the answers were unmitigated, boldly-delivered bull poop really started to grate on me. i have a low enough opinion of the profession of computer programming that i wouldn’t be surprised if it replaces workers or obviates work.

      but personally i’m not insecure in my job position (in fact, if it goes away, i have aspirations totally separate from it – it might even be a boon), so i won’t act as insecure as the lawyers do. i’m saying, insecurity is orthogonal to our assessments of the tool.

      1. None says:

        It’s more than insecurity. Lawyers, like doctors, are for the most part doing very menial and repetitive jobs. Most won’t stray from the norm or try something creative at all, let alone invent or research.

        I never understood the high regard they have in society; it seems outdated. It’s at most based on power/influence, certainly not on being especially impressive.

        Computer programming is very different: you have basic tasks that are frequently done, so there is a large base to learn from and imitate. And that’s not bad; imitation is an important and useful tool.

        But the main part of programming is not translating already very technical specifications into even more technical specifications; it is translating a much more abstract human goal into a set of solutions that solve it. This can require deep understanding of human needs or of the other topics at hand.

        You would need at least a real AGI to solve such tasks, and even then I doubt it could fully capture what a human being can.

        The real issue is not improving tools, though I doubt we are anywhere close to tools being able to do as much work as people think (generating stereotypical code for well-known problems only goes so far).

        But even if they improve beyond that level, and also gain metrics of reliability (which is essential, since you can’t use code that only works sometimes while giving you no intuition of when or how it could fail), the main issue is still not those enhanced tools helping you work more efficiently.

        The real problem is the implied assumption that you should *always* do more than simple tasks. Simple tasks are good, because people need time to rest, and the mind actually develops when it can wander. If you have to be permanently creative or work out new smart solutions, your brain will get exhausted and become less performant.

        Idle time or low-effort time has been shown to be useful in many studies. But it’s also simply respectful of human nature. We are not machines, and we should not be defined by the work we can do that outdoes others.

        This mindset is inevitably going to fail at some point. Whether the competition is humans or AIs is not really relevant; it’s simply an inhumane approach. And no, it’s not survival of the fittest but overdone optimization, which leads to monoculture. Longer term it leads to less variation and less fitness.

        1. Dude says:

          >Simple tasks are good, because people need time to rest, and the mind actually develops when it can wander.

          Simple tasks and routine work build up your brain power. Anything that requires you to actually use your brain does. People discount this too much, saying “you don’t need to learn it when you can just google it”. Well, if you aren’t doing the simple things, you won’t have the brains to do the smart things.

          Suppose a weightlifter went on a regime where they lay on a beach chair while a robot lifts weights next to them, for months and years, until they finally get to the competition where they have to lift 200 lbs by themselves. Not gonna work.

        2. Foldi-One says:

          @Dude
          >Suppose a weightlifter went on a regime where they lay on a beach chair while a robot lifts weights next to them, for months and years, until they finally get to the competition where they have to lift 200 lbs by themselves. Not gonna work.

          Rather a flawed analogy – if the goal is to lift the weight for some gain, and all that matters is that mass gains gravitational potential energy, then the competition would be run that way too. You do not structure anything to be harder, more expensive, etc. for no good reason in the real world unless strict rules make you, at which point your weightlifter would be following those rules in training, as those are the rules of their ‘game’. But when the only thing that matters is the result, it is enough for the weightlifter to understand how to properly operate their forklift and be able to read its maintenance instructions!

        3. Dude says:

          >Rather a flawed analogy – if the goal is to lift the weight for some gain and all that matters is that mass is given more gravitational potential energy then the competition would also be that way.

          The point is that the person is not a weight lifter because they never trained for it. They can do nothing more than the robot that was built to lift weights on their behalf.

          Likewise, AI doesn’t make people more intelligent or enable us to do more – it makes us less intelligent because it stops us from using our brains. Even if we stand on the shoulders of these “giants”, we are unable to do anything more because we’ve been reduced to intellectual weaklings.

        4. Foldi-One says:

          Except, Dude, it does NOTHING at all to prevent us from lifting that metaphorical weight. Take it away and perhaps stuff takes longer than it would for folks who never had that tool to help, but people will adapt again in short order. And with it, well, you can lift more weight in less time (or in some other way better), and therefore have much more time to think and do other things – on the whole, more productive thinking can be done!

          Having a calculator doesn’t prevent you from adding, subtracting, or doing long division – it just means you don’t actually have to, and the chance of human error goes way, way down, as the only remaining source of silly little human errors is in the initial construction of the operation, with the reliable tool doing all the work. Being a crane/forklift operator vs. a manual weight manipulator is no different. If you don’t have the tool and stuff needs to get done, it gets done.

          Having a CNC vs. a manual mill doesn’t really change anything either – except now one tiny erroneous bump doesn’t ruin weeks of work, and making a round element in a complex part no longer takes making the right fixturing and a heap of indicating-in of the reference surfaces with each move. Or making something that really, really wants to be one part in 200 sub-assemblies… Using a CNC or a manual mill is largely the same, and having a DRO vs. not doesn’t make any difference to what can be done or how you have to think about it. It just makes some bits of the task easier!

    2. None says:

      Indeed, but this says more about the job of a lawyer/the system than it does about the AI.

    3. Pat says:

      Um. No. Holy hell, you need to learn the backstory of that.

      It wasn’t doing well. It was doing *horribly*. It crafted a subpoena for the officer in a traffic stop, which is one of the dumbest things you can do, since 90% of the time you win because the officer doesn’t show. Then people started looking into the product and discovered that most of the outputs were taking *hours* to generate (so… not AI-autogenerated), and the few that did happen were template assembly, not autogenerated.

      Then after asking support why things were taking so long, the person who submitted the requests was banned, and the TOS was changed to prevent you from testing the service. The owner started changing the TOS at basically a record pace as people kept finding issues.

      Then the stories of people who signed up for the service and couldn’t get it cancelled (it’s a monthly recurring charge) showed up, and the class-action suits started.

      The reason lawyers are getting involved is because it has all the hallmarks of a scam, not because they’re scared.

  3. chris says:

    Well, it is just a step forward for AI; not saying that is a good thing. I suppose with something like this you create it, see what mistakes it makes, improve, and repeat. People may just be excited about it and attempting to put that into words. Separate note: I suspect at some point you could teach it what ‘learning’ is, give it a bunch of sensors to take in information while it keeps the ability to communicate with us like it does now, and then see what it has to say. That would be interesting.

