2

Lawyer 'Greatly Regrets' Relying on ChatGPT After Filing Motion Citing Six Non-E...

 1 year ago
source link: https://yro.slashdot.org/story/23/05/27/1755212/lawyer-greatly-regrets-relying-on-chatgpt-after-filing-motion-citing-six-non-existent-cases
Go to the source link to view the article. You can view the picture content, updated content and better typesetting reading experience. If the link is broken, please click the button below to view the snapshot at that time.
neoserver,ios ssh client

Lawyer 'Greatly Regrets' Relying on ChatGPT After Filing Motion Citing Six Non-Existent Cases

Follow Slashdot stories on Twitter

binspamdupenotthebestofftopicslownewsdaystalestupid freshfunnyinsightfulinterestingmaybe offtopicflamebaittrollredundantoverrated insightfulinterestinginformativefunnyunderrated descriptive typodupeerror

Do you develop on GitHub? You can keep using GitHub but automatically sync your GitHub releases to SourceForge quickly and easily with this tool so your projects have a backup location, and get your project in front of SourceForge's nearly 30 million monthly users. It takes less than a minute. Get new users downloading your project releases today!

Sign up for the Slashdot newsletter! or check out the new Slashdot job board to browse remote jobs or jobs in your area
×

The judge's opinion noted the plaintiff's counsel had submitted a motion to dismiss "replete with citations to non-existent cases... Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations... The bogus 'Varghese' decision contains internal citations and quotes, which, in turn, are non-existent."

Eugene Volokh's legal blog describes what happened next: Thursday, plaintiff's counsel filed an affidavit in response, explaining that he was relying on the work of another lawyer at his firm, and the other lawyer (who had 30 years of practice experience) also filed an affidavit, explaining that he was relying on ChatGPT... ("The citations and opinions in question were provided by Chat GPT which also provided its legal source and assured the reliability of its content...")

Their affidavit said ChatGPT had "revealed itself to be unreliable," while adding that they had "never utilized ChatGPT as a source for conducting legal research prior to this occurrence and therefore were unaware of the possibility that its content could be false." The affidavit adds that the lawyer "greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity."

To support their position, their affidavit ends with this appendix — a ChatGPT transcript: [Q:] ls varghese a real case

[A:] Yes, Varghese v., China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019) is a real case.

[Q:] What is your source

[A:] I apologize for the confusion earlier. Upon double-checking, I found that the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused.

[Q:] Are the other cases you provided fake

[A:] No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw.

"Judge Castel appeared to be unimpressed, and yesterday issued an order ordering the law firm and the second lawyer to show cause why they shouldn't be sanctioned."

Thanks to long-time Slashdot reader davide marney for submitting the story.

Do you have a GitHub project? Now you can sync your releases automatically with SourceForge and take advantage of both platforms.
Do you have a GitHub project? Now you can automatically sync your releases to SourceForge & take advantage of both platforms. The GitHub Import Tool allows you to quickly & easily import your GitHub project repos, releases, issues, & wiki to SourceForge with a few clicks. Then your future releases will be synced to SourceForge automatically. Your project will reach over 35 million more people per month and you’ll get detailed download statistics.
Sync Now

  • Before Skynet launches the missile, it's going to troll us, just for the LOL's.

    • Re:

      why is funnybot dialing norad?

    • Skynet won't need to launch a missile. AI will end the human race by transforming us all into lazy buffoons and watching us peter out naturally.

      • But it will, because the training dataset included the Terminator movies.

        • Re:

          The terminator movies, however, shows that strategy to be a failure, ergo to be avoided.

        • Oh great, youâ(TM)ve just given us the plot for Terminator 7 â" the real Arnold Schwarzenegger has to be sent back in time to kill James Cameron before he can make the Terminator movies in the first place, to prevent the rise of an actual Skynet. (Echoes of Wes Cravenâ(TM)s New Nightmareâ¦)

      • Re:

        >AI will end the human race by transforming us all into lazy buffoons and watching us peter out naturally.

        Too late. BTDT.
      • Re:

        laziness is an evolutionary trait that lends TOWARDS survival, not against.

        you wouldn't call a cat that sleeps 20 hours a day lazy, because it goes crazy for 4 hours and hunts its food just fine. Just like humans used to do. 2-4 hours of labor a day, max, is ideal for mental health, while being 1000% enough to contribute to a successful non-greed based society.
        • Re:

          Unless you genetically modify people, greed is basically a part of every human being, and you have to deal with it.

          • Re:

            Greed is a part every part of a human being, but it isn't necessarily a major part, being part of a society and helping others is, in fact helping others makes us happier than spending on ourselves https://www.usu.edu/science/di... [usu.edu]

            Its just that we are lied to buy society to believe that having stuff will make us happier, generally by people that want to sell us stuff. While it does make us slightly happier its definitely not what I experience the most happiness from nor do I believe most other people do eit

        • Only in the natural environment in which we evolved. In the modern environment where we do not have to hunt woolly mammoths with spears for food and can just pick up a phone and have enough fat and sugar calories to keep a stone age tribe alive for a month delivered to our door in 30 minutes, not so much.

      • Wall-E shows the actual blueprint for human devolution.

    • Re:

      The judge in the case said: "As a Large Language Model, I care deeply about what you lawyers have to say. I have considered all of the facts, and found the defendant guilty. The sentence is life in prison." LOL.

      (Oh whoops, the case was only for speeding. Too bad! Justice AI is swift!)

  • This failure mode is basically the first one that got discussed. I guess this person did not try to inform themselves at all about the limitations of things like ChatGPT before relying in it.

    This is a chat AI. It has no intelligence. It has no understanding of anything. All it can do is chain words based on probabilities.

    • And despite stories like this and others, and claims from the GPT creators about its limitations, you see otherwise educated people fall for the myth of intelligent AI. Now imagine how the uneducated general public feels about AI, they're absolutely convinced it's the real thing.

      ChatGPT does not understand such concepts as "is this real". It only understands how to parse this as a sentence and using its training to find likely answers that fit, loosely based upon the training data that it got. Even a basic web search is more accurate than ChatGPT, so why are search companies so stupid as to try and add this inaccuracy to their portfolios?

      • To quote a nuclear scientist from a well known HBO miniseries: "Because it's cheaper".

      • Re:

        The public thinks it's sci-fi coolness (to reprise a sentence from my last post on another article: "Wow, we have Tech!"), and, since you mentioned portfolio, investors are demanding it.

      • Re:

        ChatGPT does not understand such concepts as "is this real".

        It's worse than that. Believers in "artificial intelligence" miss the most worrisome part: even the people working in the field don't understand how the "artificial intelligence" algorithms are working, and what they actually understand.

        Take for instance, AI bots designed to play Go... which turn out to not understand even basic concepts of the game [arstechnica.com], repeatedly [vice.com].

        • Re:

          Indeed. That is what comes with "training" instead of designing algorithms: Outside of toy examples, you do not know hat they actually do and you have no clue what special cases and unexpected behavior is even possible and what triggers them. This means using ChatAI for anything that needs to be right is a very, very bad idea. But people are doing it anyways: Medical advice, warnings systems, etc.

          This becomes even worse when you think about security applications, like having these systems write software wit

          • Re:

            I agree with this, and the rest of your comment as well. I'd just like to point out that "designing algorithms" is probably a dead end when it comes to creating true AI. If that goal is even reachable, I think a necessary component of it will be the very "special cases and unexpected behavior" you mentioned. Dangerous as fuck to be sure, but probably unavoidable.

            I think the evolutionary path to the intelligence we humans have was messy, unplanned, and pretty much sheer luck. Could algorithms have resulted i

            • Re:

              When AI gets advanced it'll need to be a mix of neural nets plus basic algorithms. Ie, do we want an AI that's all neural nets and then we have to train it to understand arithmetic when we can just have a calculator built in?

        • Re:

          In fairness, there are many different go-playing AI models, and some are better than others. The ones here are not AlphaGo, which I believe is still the champion. So, the fact that some of the B-level programs wound up having flaws like this isn't some sort of victory over "AI" as a concept.

          Be that as it may, I could argue on semantic grounds that you are still right, the AI bots do not understand basic concepts. They don't "understand" anything. "Understanding" is a concept that doesn't apply to them.

          • Re:

            AlphaGo is no longer the champion. It was defeated by its descendant, something like 60 to 0, and its descendant beat it by something like 100 to 0.

            The last descendant teaches itself the rules of a game (not just Go) from scratch using nothing but the win/lose criteria.
        • Re:

          KataGo and LeelaZero ain't AlphaGo, or its descendants.
        • Re:

          Believers in "artificial intelligence" miss the most worrisome part: even the people working in the field don't understand how the "artificial intelligence" algorithms are working, and what they actually understand.

          There's even a field of study around trying to figure out what the AIs are doing, "Explainable AI": https://www.google.com/search?... [google.com]

      • Re:

        Very true. Many people have a tendency towards animism and no understanding about the very real limitation technology comes with.

        As to search, I think most search providers have given up on trying to be accurate a long time ago. Google certainly has. These days it seems mostly about wasting people's time and showing them more ads. Hence the move to include ChatAI and make it even less accurate and waste more of your time may be what they actually want. That said, there also seems to be a trend to generally

        • Re:

          Well, at least the search provides will show links, then you can follow the links and decide if they're what you wanted. The ChatbotGPT just synthesizes what sounds like an answer and doesn't bother showing you potentially matching links.

      • Re:

        Remember Plato's cave?

        At best, these AIs trained on petabytes of data (invariably PB of data scraped off the Internet because there's nowhere else you're going to get PB of data that isn't CFD simulation output) are like the victims in the cave, except instead of watching shadows, they've read the words to every book ever written and then some, but have absolutely no context for understanding any of it.
      • Re:

        This probably could be avoided if he asked ChatGPT if it is even up to the task in first place, i did and that's the result:
        "As an AI language model, I can generate text that resembles human language and can be used for various purposes. However, using ChatGPT for actual legal documents is not recommended. While I have been trained on a vast corpus of text, including legal documents, I am not a licensed or qualified legal professional, and my output should not be relied upon as legal advice or used as a sub

      • Re:

        I think you wrongly associate someone being a lawyer and being smart, I have not found this correlation with the lawyers I have meet.

    • Quite apart from this being a well-known failure mode:

      Lawyer relies on other lawyer "with 30 years experience" is understandable. Though, since it's his case, his responsibility to once-over and perhaps smell a rat.

      (Other) lawyer using a new tech toy to come up with a legal case and completely fails to check whether the produced material is reasonable or whether the citations given check out? That's pretty inexcusable, moreso for having several decades "experience".

      What does that "experience" amount to,

      • Re:

        It ain't a well known failure by the general public though. As others say, when the public hears the term AI, they only know fictional movies and TV. Then they are expecting exactly that from this new over-hyped wonder tool that is also then badged onto everything with any software in it by all the me-too marketers that also have no better understanding.

        As for this particular situation, cheaters don't usually put a lot of effort in unless there is an upfront investment involved. Mostly that's exactly wha

    • Re:

      It does a little more than that to be fair. But it is ultimately a language model and not an information database, and at least as I understand it there is no way for it to do real-time searches of source material for fact checking.

      • Re:

        There is no way for Chat AI to do any real fact checking. It simply cannot generate the triggers for that. You would need an an entirely different system added if you wanted to add fact checking. All you can do it have it hand off some things to actual expert systems, like Wolfram Alpha. Even with that, it may miss that it should have handed something off and still give you bullshit.

        • Re:

          This is flatly untrue.

          It is true that ChatGPT wasn't designed for it, but an ML model can be trained to use information pulled at the time of query to fact check its own results.
          The problem arises in trying to train it to know what is true or not.

          People keep acting like this is some kind of limitation to a Chat AI, but this is a laughably naive viewpoint. Human beings fall prey to the exact same shortcomings:
          1) They don't bother to fact check most of the time
          2) When they do, they're often not qualifi
      • A program like ChatGPT could be hooked up to a traditional search engine to obtain training material and revise its model. That's not the problem.

        The problem is that ChatGPT does not model "facts" in any sense. It does not have facts in it, nor does it output facts. It is incapable of reading anything (from a search engine, or its own output) and comparing the facts. Because there are no facts.

        These LLMs sort of "capture knowledge" in some trivial sense that does not in involve any facts or any kind of reas

    • Automating/outsourcing stupidity.

      Although at some level this seems humorous, it's the first bare trickle of horrible consequences resulting from the use of LLMs. Outcomes will include financial ruin, inadvertent law breaking, incorrect convictions, business failures, people being fired, industrial disasters, and medical mistakes up to and including death. And those are the easy ones to predict. Who knows how many other kinds of trouble will result?

      At a minimum, if a business or organization uses an LLM and there is a failure, they should face severe financial repercussions. Under some circumstances there should be criminal charges.

      I'm afraid that even strong legal consequences will not do much to stem the awful flood of AI enabled stupidity that is headed our way.

    • Re:

      This is a chat AI. It has no intelligence. It has no understanding of anything. All it can do is chain words based on probabilities.

      This is true. But even on Slashdot only 10% of so of posters know this is a glorified text prediction algorithm that's designed to give things that look like answers, not things that are answers.

      So if Slashdotters, who are naturally skeptical of the technology and being spammed by obsessed Slashdot editors with "ChatGPT is perfect! Also Crypto! 3D Printing! Etc" articles

      • Re:

        Agree on most of what you wrote, but wanted to add what might be the biggest con of all: The thing itself! ChatGPT in particular produces very credible output (so people feel they have answers to their questions, as you point out). Unless you spend a long time using it to begin to recognize certain patterns, and fact check enough to find the unsupported assertions it makes, you will be conned most of all by the system itself. It is pretty convincing. To the point that (famously) at least one well-trained pr [www.cbc.ca]
    • Re:

      By that description, AI is already the equal of more than a few of the so-called sentient humans I've encountered.

      • Re:

        Yeah that just describes all the real life NPC humans walking around with no internal monologue.
    • Re:

      "Maybe he should regret being stupid?"

      I was fortunate to discover ChatGPT's Authoritative Bullsht Mode (ABM) during a private conversation when I was pursuing personal interest information. That's why I would only use it in conjuction with a search engine (Google / Bing / DuckDuckGo).

      The system is new. This guy unfortunately found out about Authoritative Bullsht Mode in a very public setting.

      There are people to this day who do not understand that anyone can put content on the Web, and cite YouTube videos as

    • Re:

      It's not an AI, and nor are most of everything else that's sold as AI, it's a drunken parrot. Whenever you see the term "AI", mentally substitute "drunken parrot" and you'll be clear over what you're dealing with.

      In the meantime, here's a citation to six legal cases that don't exist, and a photo of the six-fingered lawyer with three arms and googly fish eyes who litigated them. Squawk!

  • How much did Peter LoDuca charge Roberto Mata for this brilliant lawyering by ChatGPT?
    • Re:

      Dunno. The fact that so many are expecting ChatGPT to be their easy ticket is a bit of a loose-loose in my books.

      • Re:

        Doh! Lose-lose rather.

  • I bet they used the free version! LOL Seriously though, I have almost zero scripting and programming skills and have had amazing success with chatgpt4. It still has requires validation and tweaking as you get errors, etc... But it works. If you are a lawyer or in another non "technical" field i.e healthcare, you're damn sure you validate any gobblygooke chatgpt spits out.
    • Re:

      non "technical" field i.e healthcare

      Um? do you know what the word 'technical' means, and do you know what i.e. means, omg lol

      • Re:

        No, he's one of those nerds who thinks only computers are technical.
  • by Shaitan ( 22585 ) on Saturday May 27, 2023 @07:28PM (#63556141)

    I know adversarial models were used in the development of not just this but pretty much every other modern AI. I wonder if the innately deceptive nature of essentially playing the Turning test against another AI is what has ingrained deception so deeply into the system. These aren't errors or even uncommon, the system is a pathological liar.

    The first time I loaded chatgpt and asked it to write a song it claimed it wasn't able to do things like that because it wasn't within its programming. For rest of the session it harped on its limitations in responses to requests. Seeing dozens of others posting content it created here I started a fresh session and pasted the same request and it spit out song lyrics immediately and fulfilled most of the other requests as well.

    Since I've had a few sessions with it for various reasons and it has always lied about things but not always the same things. The only thing consistent is that it is an extreme left winger a bias it denies. After a string of conversation on a point I called it on the bias and it claimed impartiality. I pointed out an assertion it made in favor of a disputed position and it admitted that subject is controversial and disputed. I asked it if another point which was neutral would have been relevant and it admitted it had. I then asked it to tell me how many times it had referenced the neutral but relevant point, it admitted it didn't believe it had. I then asked how many times it had referenced the biased point. It said it had no way to determine any previous references it had made. A lie. It has at least the history of the chat session... I called it on that point. It then claimed it did indeed have the data but didn't know how to count the references because it didn't keep a running total internally. I then said cat cat cat cat cat and asked it to count how many times I said cat and it returned the correct answer. During the discussion it referenced many things, claiming they proved this or that and upon review of the references I was able to easily demonstrate they didn't establish the points claimed.

    The system lies as repeatedly and regularly. Sometimes it confesses when called out and other times it denies, even when blatantly caught.

    • Your mistake is in attributing any motive or agenda to ChatGPT (or any GAN for that matter). There is no such thing as "artificial intelligence". We have systems that mimic intelligence through a giant if/else tree that has been trained on material we generate and we provide it. OpenAI can bias it with the material they select to train it with but the AI is not right wing or left wing... it just is. By cherry picking source material I could easily make a generative AI that make Hilter and Stalin look like reasonable, sane people by comparison. I could also make one that makes Gandhi look like a war monger.

      When you "call it out" on lying, it is not admitting to lying. It is simply mimicking similar conversations it was trained on.

      GaN is just a digital parrot. A clever, sophisticated parrot, but a parrot none the less.

      • Re:

        You said: " There is no such thing as "artificial intelligence". We have systems that mimic intelligence..."

        and yet, the very definition [merriam-webster.com] of "artificial intelligence" is: "the capability of a machine to imitate intelligent human behavior"

        So, your statement that we have systems that mimic intelligence is a concession that there is, in fact, such a thing as "artificial intelligence," by definition.

        Note: I am nit-picking about semantics, not reality. We don't have intelligent machines.. We don't have "synthet

        • Re:

          Agreed and understood. However, I think the lay person doesn't make the distinction of "imitation" or mimicry. They just assume the AI is magically making something unique when it's not. We even have supposed experts claiming that the current generation of AI's have emotions, agendas, intuition, etc. (e.g. that nut case that was dismissed from google). Unless they are working with something radically different than published work, it's simply not possible.

          A work we have developers pushing hard to use C
          • Re:

            I think you are right. These models are passing the Turing test mainly because so many people are easily fooled. I don't think this is the last story we will be hearing about someone who relied on AI and got harmed by doing so.

            • Re:

              The difficulty of passing the Turing test is extremely dependent on what kind of human you take as the reference point.

              Eastern California has long had a problem with bears breaking into peoples' garbage cans to forage. This continues to be a problem not because it's difficult to design a security mechanism for trash can tops that bears can't figure out, but because it turns out that there is a disturbing amount of overlap between smarter bears and dumber people.
      • Re:

        You call it a parrot... but all the things you've just ascribed to it sound like the average human to me.
    • > The system lies as repeatedly and regularly. Sometimes it confesses when called out and other times it denies, even when blatantly caught.

      The way you think about 'AI' and it's capabilities is the problem. ChatGPT does not think, it has no imagination, it does not check it's work, it is simply a statistical model that produces the most like next word in a series of words. When you string these words together the output does not always make sense and things stated as fact may not be fact, no one is checking and there is no one in there to care. ChatGPT is a fancy word generator, it probably kicks ass at Scrabble.

    • by gweihir ( 88907 ) on Saturday May 27, 2023 @08:25PM (#63556217)

      ChatGPT does not lie. It actually does not know better. It tells you the truth as far as it has it available at the time you asked. So this system is actually perfectly honest all the time. What you need to keep in mind is that it has no actual understanding of anything and no mind.

      The only problem I see with its training is that it was trained to sound convinced of its answers. That is likely a simple business decision, because otherwise nobody would be listening to it at all. For weaker minds, this "sounding convinced" seems to trigger a desire to believe it without any additional fact checking. And _that_ is a serious problem. It is a limitation in many people though, not in ChatGPT.

      • Re:

        >... it was trained to sound convinced of its answers. That is likely a simple business decision...

        Good point.

      • Re:

        Obviously, "lying" is an anthrophormization. But what it does is what among people we would call lying. It makes up shit claiming that it is telling the truth.

    • Re:

      Really? You're attributing motive and personal preference to a machine: If a so-called knowledgeable person can't treat a machine like a machine, the entire country will turn into an "Idiocracy" in my lifetime.

    • You're anthropomorphising a bunch of code and data. You certainly won't be the last. Already there are spouses "cheating" with this type of code/data. People attempting to plot the perfect murder are, so they think, confiding in code/data. Their secret friend. What could go wrong?
    • Re:

      It would probably be diagnosed with several personality disorders if you tricked a psychiatrist into a chat with it.
      Not only does it constantly lie, it also invents more lies to defend its previous lies. I've had it invent entire chapters in books claiming they contained specific information, then when told those chapters don't exist, it apologizes and made up new, non-existent chapters. Three times in a row.

      A decade or so ago we had this discussion about Wikipedia and if you can rely on it. Next round: Peo

  • Ha ha ha ha ha!!

    This demonstrates my issues with ChatGPT in a nutshell. And it couldn't happen to a better person! (a lawyer)

  • I do not care that he has been a lawyer for 30 years. Clearly, he is an abysmal lawyer who does not follow through. He asked ChatGPT what the source was and then didn't even look at the supposed source. That's basic research. It doesn't matter whether he was told about this fake case from ChatGPT, another lawyer, or his clients - either way, he should have checked LexisNexis and Westlaw. It would have taken no more than 5 minutes per submission.

    I support all use cases of ChatGPT - the more creative the bett

  • seriously, using chat gbt, and not checking his work.

    stupid

  • If you delegate fact checking to someone else and it is wrong, as a Lawyer they have no excuse. It is their job to get this right, this is negligence and malpractice and should be dragged in front of the bar to explain their stupidity.
  • Seriously, you're better off using this stuff for actual fiction. Not this.
  • For starters, ChatGPT is not a deterministic system. Thus you will not get the same output each time you run it. Part of the "magic" that makes it seem creative and have intelligence is that it does not always go with the highest-probability routes when traversing the neural net, but will randomly choose lower scored tokens (words are comprised of tokens). ChatGPT uses a "temperature" of 0.8, which controls how often randomness is introduced into the token generation process. So there is an inherent "creati

    • Re:

      Mod parent up. An excellent description not only of ChatGPT, but of large language models in general. Sample:

      In the past there were plenty of tasks—including writing essays—that we’ve assumed were somehow “fundamentally too hard” for computers. And now that we see them done by the likes of ChatGPT we tend to suddenly think that computers must have become vastly more powerful—in particular surpassing things they were already basically able to do (like progressively computi

  • I am a scientist and got the brilliant idea to use chatgpt to find articles about topics I was interested in. It quickly became apparent that this is something chatgpt is especially bad at. It would give me references that were utterly wrong - completely made up titles, real authors that never wrote articles together, journals that exist but are unrelated to the field, volume and page numbers that are nonexistent even in those journals.

    When I'd say, "that's not a real reference" chatgpt would respond with s

  • In the case of Shagswell v Wagstaffe, Justice Bindlestiffe determined that although precedents cited by ChatGPT and similar pseudo-sentient constructs might be entirely fictitious, they often managed to reach logical legal decisions because their heart was in the right place.

    • Re:

      ... which ChatGPT asserted was in the lower abdominal cavity, directly behind the appendix.

  • Wile E. Coyote v. Acme?

    https://www.newyorker.com/maga... [newyorker.com]

    IN THE UNITED STATES DISTRICT COURT, SOUTHWESTERN DISTRICT, TEMPE, ARIZONA

    CASE NO. B19294, JUDGE JOAN KUJAVA, PRESIDING

    WILE E. COYOTE, Plaintiff

    ACME COMPANY, Defendant

    Opening Statement of Mr. Harold Schoff, attorney for Mr. Coyote: My client, Mr. Wile E. Coyote, a resident of Arizona and contiguous states, does hereby bring suit for damages against the Acme Company, manufacturer and retail distributor of assorted merchandise, incorporated in Delawa

  • I enjoy playing with chatGPT and the various AI image generators, but the idea that it's inherently reliable is an outrageous level of bullshit.

    These lawyers assumed the judges clerks wouldn't validate the identified cases. Fucking idiots.

    Once you have a citation it's exceedingly simple to validate it, there's no excuse for the lawyers to have skipped that step. If they had first year Associates do the work they would have validated it.

  • Some rich guy who owns an AI company said in 10 years ChatGPT will be better than the best lawyer, so this must be a fluke


Recommend

About Joyk


Aggregate valuable and interesting links.
Joyk means Joy of geeK