
'OK, So ChatGPT Just Debugged My Code. For Real' - Slashdot

 11 months ago
source link: https://developers.slashdot.org/story/23/10/15/185245/ok-so-chatgpt-just-debugged-my-code-for-real




'OK, So ChatGPT Just Debugged My Code. For Real' (zdnet.com) 100

Posted by EditorDavid

on Sunday October 15, 2023 @02:08PM from the shape-of-things-to-come dept.

ZDNet's senior contributing editor also maintains software, and recently tested ChatGPT on two fixes for bugs reported by users, plus a new piece of code to add a feature. It's a "real-world" coding test, "about pulling another customer support ticket off the stack and working through what made the user's experience go south." First...

please rewrite the following code to change it from allowing only integers to allowing dollars and cents (in other words, a decimal point and up to two digits after the decimal point).

ChatGPT responded by explaining a two-step fix, posting the modified code, and then explaining the changes. "I dropped ChatGPT's code into my function, and it worked. Instead of about two-to-four hours of hair-pulling, it took about five minutes to come up with the prompt and get an answer from ChatGPT."

Next up was reformatting an array. I like doing array code, but it's also tedious. So, I once again tried ChatGPT. This time the result was a total failure. By the time I was done, I probably fed it 10 different prompts. Some responses looked promising, but when I tried to run the code, it errored out. Some code crashed; some code generated error codes. And some code ran, but didn't do what I wanted. After about an hour, I gave up and went back to my normal technique of digging through GitHub and StackExchange to see if there were any examples of what I was trying to do, and then writing my own code.
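The first fix described above, loosening an integer-only check to accept dollars and cents, usually reduces to a one-pattern change. The article's actual code isn't shown, so this is a hypothetical sketch in Python rather than the author's code:

```python
import re

# Integer-only pattern the original validation might have used (assumed).
INT_ONLY = re.compile(r"\d+")

# Loosened pattern: an optional decimal point with one or two digits after it.
DOLLARS_CENTS = re.compile(r"\d+(\.\d{1,2})?")

def is_valid_amount(text: str) -> bool:
    """Accept '12', '12.5', and '12.50'; reject '12.345' and a bare '12.'."""
    return DOLLARS_CENTS.fullmatch(text) is not None
```

The whole change is the optional `(\.\d{1,2})?` group; everything else in the validation stays intact, which is why this kind of fix is well inside what a prompt can describe in one sentence.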

Then he posted the code for a function handling a WordPress filter, along with the question: "I get the following error. Why?" Within seconds, ChatGPT responded... Just as it suggested, I updated the fourth parameter of the add_filter() function to 2, and it worked!

ChatGPT took segments of code, analyzed those segments, and provided me with a diagnosis. To be clear, in order for it to make its recommendation, it needed to understand the internals of how WordPress handles hooks (that's what the add_filter function does), and how that functionality translates to the behavior of the calling and the execution of lines of code. I have to mark that achievement as incredible — undeniably 'living in the future' incredible...
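For readers unfamiliar with the fix: add_filter()'s fourth parameter tells WordPress how many arguments to forward to the callback, and a mismatch produces exactly the class of error described. The toy Python registry below models that behavior; it is an illustration of the concept, not WordPress's real implementation:

```python
# Minimal model of a WordPress-style hook registry, showing why the fourth
# add_filter() argument (accepted_args) matters. Illustrative sketch only.

class Hooks:
    def __init__(self):
        self._filters = {}

    def add_filter(self, name, callback, priority=10, accepted_args=1):
        self._filters.setdefault(name, []).append((priority, accepted_args, callback))

    def apply_filters(self, name, value, *extra):
        for _, accepted_args, cb in sorted(self._filters.get(name, []), key=lambda f: f[0]):
            # Only the first `accepted_args` arguments are forwarded. With the
            # default of 1, a two-argument callback raises a TypeError, which
            # is the kind of bug the article's one-parameter fix addressed.
            args = (value, *extra)[:accepted_args]
            value = cb(*args)
        return value

hooks = Hooks()
# A two-argument callback registered with accepted_args=2 works correctly:
hooks.add_filter("the_title", lambda title, post_id: f"{title} ({post_id})", 10, 2)
```

With accepted_args left at its default of 1, the same callback would be invoked with a single argument and fail, matching the error-then-fix sequence in the article.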

As a test, I also tried asking ChatGPT to diagnose my problem in a prompt where I didn't include the handler line, and it wasn't able to help. So, there are very definite limitations to what ChatGPT can do for debugging right now, in 2023...

Could I have fixed the bug on my own? Of course. I've never had a bug I couldn't fix. But whether it would have taken two hours or two days (plus pizza, profanity, and lots of caffeine), while enduring many interruptions, that's something I don't know. I can tell you ChatGPT fixed it in minutes, saving me untold time and frustration.

The article does include a warning. "AI is essentially a black box, you're not able to see what process the AI undertakes to come to its conclusions. As such, you're not really able to check its work... If it turns out there is a problem in the AI-generated code, the cost and time it takes to fix may prove to be far greater than if a human coder had done the full task by hand."

But it also ends with this prediction. "I see a very interesting future, where it will be possible to feed ChatGPT all 153,000 lines of code and ask it to tell you what to fix... I can definitely see a future where programmers can simply ask ChatGPT (or a Microsoft-branded equivalent) to find and fix bugs in entire projects."


    • Re:

      Of course, I don't have mod points today.

    • by Anonymous Coward on Sunday October 15, 2023 @03:06PM (#63926907)

      In fact adding Unicode is simple. What is hard is to prevent abuse.

      At some point, /. did support unicode, but slashdotters used it to do all kinds of weird things, such as replacing the moderation field by (+7, Astounding). I cannot find the link to those posts anymore; perhaps somebody with superior google-fu can help?
      • Re:

        It can't be that difficult because almost every single other forum supports unicode characters without blowing everything up, somehow.

        • Re:

          But those other forums do not have slashdotters who WILL find out how to abuse this.

          • Re:

            haha, you're funny.... never underestimate a lot of users with a lot of time... people are trying to abuse all major platforms

            I'm not saying it's an easy problem to solve but it's not impossible either

      • At some point, /. did support unicode, but slashdotters used it to do all kinds of weird things, such as replacing the moderation field by (+7, Astounding). I cannot find the link to those posts anymore; perhaps somebody with superior google-fu can help?

        That's the point. /. has supported Unicode for well over a decade now. The problem is, Unicode is always evolving and constantly adding new codepoints that need to be filtered out. Lots of examples of Unicode abuse, usually in the form of people pasting special characters that go and destroy websites.

        The most common form of abuse was the right-to-left-override where you can insert RTL formatted text in what would normally be LTR text (e.g., if you need to insert some Arabic in a block of English text). This would then set the text direction backwards when rendered on screen.

        Moderation abuse is simple to Google because of this - just look for "5:erocS" in Google - because after an RTL override codepoint, it will be reversed. (Hint: a Unicode renderer will render the "5" character, then move left, render a space, move left, render the colon, so you get "Score: 5". Follow it with an LTR override character and things appear normal again.)

        Another one is overdecorated text - some languages are big on decorations, so those can be misapplied to other codepoints, leading to text that's a few million pixels tall and stretches above the line, so you see a black line running down the page. Repeat this a few times and you can render a whole webpage black. Granted, you're also going to write a comment that's a few megabytes in size...
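The RTL-override trick described above rides on Unicode's bidirectional control codepoints, and one standard defense is to strip those codepoints from user input before rendering. A minimal sketch, assuming the site only ever wants plain comment text:

```python
# Unicode bidirectional control characters commonly abused in comments:
# the embedding/override pairs and the newer isolate characters.
BIDI_CONTROLS = {
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # LRE, RLE, PDF, LRO, RLO
    "\u2066", "\u2067", "\u2068", "\u2069",            # LRI, RLI, FSI, PDI
}

def strip_bidi(text: str) -> str:
    """Remove bidi override/embedding codepoints so a '5 :erocS' payload
    cannot reverse the rendered text direction."""
    return "".join(ch for ch in text if ch not in BIDI_CONTROLS)
```

This is deliberately blunt: it also blocks legitimate mixed-direction text (e.g. Arabic quoted inside English), which is the filtering trade-off the thread is arguing about.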

        • Re:

          I don't think anybody would complain about a compromise where only known codepoints that weren't subject to abuse were allowed. Since, again, this isn't a problem for any other website, I'm thinking this is a solved problem and such a list already exists.
        • Iâ(TM)m pretty sure other sites can render apostrophes.
    • Journalist discovers that ChatGPT understands program code months after everyone else.
    • Re:

      Why is slashdot literally the only site on the internet with this problem? Go ahead and find another one with this behavior.

    • Re:

      Please stop spamming.
  • and no, not all of them are in India yet. Also, for anyone who aspires to go above code monkey but isn't a math genius (who's really not a programmer; they're a mathematician using a tool), it means you're going to find it basically impossible to get a start.

    A huge sea change, a 3rd industrial revolution is coming. And there are no new jobs on the horizon to replace the ones we're destroying.

    I say "on the horizon" but this has been going on for ages. [businessinsider.com]

    Fun fact, following both the 1st and 2nd industrial revolutions there were *decades* of rampant unemployment until a combination of new tech and wars got us back to full employment. Those were "interesting times".
    • I don't think it will, for the same reason Cobol didn't.

      Who's going to drive it, give it prompts and then deal with the result? The job will still be called programmer.

      • Re:

        "I don't think it will, for the same reason Cobol didn't."

        I don't think the average person understands how much of our world runs on Cobol (government agencies are the biggest users, though you'd be surprised how much corporate code is still Cobol). It's significant.

        As a side note, why is Slashdot's comment editor still so shitty? I had to insert HTML linebreaks. That's nuts.

        • Re:

          I wasn't referring to Cobol as some dead language, I'm referring to it as the first.

          The point was it made programming super easy compared to bashing bits in machine code (or asm if you were lucky), so instead of needing mega turbo nerds, business people could write the business logic.

          Well, we all know how that worked out. Turns out programming has a lot of figuring out a coherent spec from requirements and then implementing those. Cobol greatly eased the latter, as have many languages since. But they are still u

      • Re:

        call it whatever but this guy:

        has absolutely no clue what he is doing. he does not know what programming is nor understands what a generative model is, yet decides to share his ignorance with the world by publishing an embarrassingly nonsensical article about exactly those two things, as "senior contributing editor" no less in a news outlet that is supposedly specialized in technology and innovation. if it's a joke it's a very cringey one. i understand /. has to make a living but who actually is supposed to

        • Re:

          I found it newsworthy as a developer because it's what managers are going to read and will thus set the bar for expectations of our work. It's not newsworthy for what the guy did -- and, in fact, did poorly, even with the AI assist. But "poorly" is still better than "not at all" and that's going to move the needle for us... quality wins when there's no cheaper option... quality suffers as soon as shoddy becomes available cheap. That's true in every industry I've worked in, alas.

          • Re:

            Shoddy is already available cheap. Just look at all of the data breaches we keep getting. Expecting some generative AI to fix it all is suicide. So expect some dumbass MBA to mandate it next week.

            What we need is to make the failures expensive for those causing them, but again good luck with that.

      • Re:

        Really? Because the job you're describing sounds more like "manager".

        • Re:

          No it doesn't.

          Look at it this way: Cobol was the first of many many innovations making the act of writing code easier. The whole idea that coming up with easy high level descriptions and have the computer figure out what to do is as old as Cobol and FORTRAN.

          But it's still programmers figuring out what high level descriptions to use because going from wishes to a coherent technical description is ultimately what programmers do.

      • Re:

        > The job will still be called programmer.

        In the early days of the industrial revolution, the problem wasn't the loss of jobs. It was that the skill levels of existing workers were no longer needed.

        People would spend years building up their trade and skills. They were replaced by children who could churn out work faster using machines.

        The same is going to happen. You will still have someone who can be defined as a "programmer", but nowhere near the skill level you need now.

        There will be still roles for experts, b

    • Re:

      I'm not quite able to parse this. Did you leave out a comma after genius maybe? Are you saying if you're not a math genius you won't be able to be a programmer because AI will take your job, and only mathematicians using a tool will be programming?

      BTW, in my opinion being a mathematician, or thinking like a mathematician, is not particularly applicable to the nuts and bolts of programming, even when programming above 'code monkey' status, unless you're programming in Haskell.

  • A bit short-sighted there perhaps. In a few iterations' time it'll be able to write the code from scratch, and frankly when it can do that it could probably emulate whatever system you want directly.

    • Author Vernor Vinge once said that in the future there will only be two branches of computer science left: code archeology (to dig up the already written library you need) and applied theology (choose the traits of the AI overlord you want to live under).

    • At some point in the future AI might write code (there's no clear reason that it won't), but these LLMs will not be able to, ever.

      To be more precise, because current LLMs are not Turing complete, they will not be able to write code that isn't just pattern matching from what they've seen before.
      • Re:

        > To be more precise, because current LLMs are not Turing complete...

        People aren't either. So what?
        • Re:

          In case you are serious, this is a simple demonstration why you are wrong [xkcd.com]. Turn on your brain before posting.
          • Re:

            Perhaps he should have said.. "Billions of people are probably not Turing complete."

            • Re:

              lol well how would you do it? It probably saves on compute hours in the celestial data center for universe simulations. Full emulation would be wasteful and they just want the question anyway.
          • Re:

            Trippy, man. Here is an old argument for why "finite state automaton" might be a better description:

            https://sci-hub.wf/10.1007/bf00414025
          • Re:

            What does Turing completeness even have to do with anything in the first place? It was a statement you simply asserted while offering no evidence or justification.

            As for the rock cartoon I would pay careful attention to "I never feel hungry or thirsty" and "I have infinite time and space"... While comic book characters may have powers and abilities far beyond those of mortal men that doesn't mean mortals have those same abilities. Human attention is a limited quantity.

            • Re:

              We're talking about the capabilities of AI. The second half of this post goes into more detail [stephenwolfram.com].

              In the comic, the point is dumbed down to make it simple for people like you. If you'd like a more serious treatment of the topic, take a CS class or read a book.

  • "I can definitely see a future where programmers can simply ask ChatGPT (or a Microsoft-branded equivalent) to find and fix bugs in entire projects."

    "ChatGPT, how do I remove the code from Windows which sends telemetry without breaking the operating system?"

    "I'm sorry, Dave. I'm afraid I can't do that."

    • Re:

      Pretty much:

      > How do I completely disable telemetry in Windows 11?

      ChatGPT: Disabling telemetry in Windows 11 is not recommended, as it's an essential part of the operating system for security, diagnostics, and improving the overall user experience. However, you can reduce the amount of telemetry data sent to Microsoft by adjusting the settings. Keep in mind that some level of telemetry is necessary for Windows to function correctly and receive updates. Completely disabling it can lead to potential
  • I gave up and went back to my normal technique of digging through GitHub and StackExchange to see if there were any examples of what I was trying to do, and then writing my own code.

    "Programmer" is unable to write a routine to copy an array. Uses "AI" to generate code that he doesn't understand, but which crashes when he runs it. So then he searches the web to see if someone already wrote this code for him somewhere, copies and pastes it, maybe renames some variables, and says it's "his code" that "he wrote". Since it compiles and doesn't seem to crash, we're good to go.

    I think I see the problem.

    • > Uses "AI" to generate code that he doesn't understand...

      "Programmers" have been doing this for a while now. Instead of AI, they used Google to find some code they could copy/paste without understanding. AI is just making it easier to do what people had been doing for a while in this regard.

      • It's easier to understand how existing code works than to create it from scratch. Hey, does that prove P!=NP?
      • Re:

        Except in this case, the AI wasn't able to solve the problem.
      • Re:

        I'm a AAA graphics programmer, I'm at the top of my game, 15 years experience on some big titles, a graphics engine that ships billions of dollars of games. When you use it properly, GPT absolutely rocks at programming for real world huge scale problems. You can quote me: it's insane to lowball GPT's ability. GPT knows intricate details about how to handle complex high performance code.

        If you are not prepared and it gives you a hallucination, or you try to let it lead, then yes, it can't code. One example -

    • Sounds like the standard modus operandi of your typical 3rd rate Lego brick method dev of which there are unfortunately far too many in our industry. Knowing their shit and being able to write working code on their own is a foreign concept to them.

    • all the time. Yes, programmers can write those routines. Easily. They do it every day.

      Anything you do every day you're gonna screw up occasionally. In an economy that uses as much software as ours "occasionally" is a *lot*.

      A lot of time and energy is spent finding those occasional screw ups. Time people are paid for. Time they won't be paid for anymore.

      Where is that money going to go? Is the CEO going to reinvest it? Or are they going to either pocket it and/or use it to buy out a competitor?
      • A lot of time is wasted because people fail to write proper unit tests. That's what the generative model should answer: "what is the unit test for this method?"
        • The test should be written first in most cases.
          • Re:

            This is always a frustrating response to me -- a complete unit test needs some knowledge of the internals of the function to know that all code paths got tested. The only tests I can write in advance are the ones that rise all the way to user requirements, which is more integration testing, usually. Yes, write as many tests as you can at the start and then get them passing, but, in my experience, that's rarely the unit tests.

            • Re:

              The initial unit tests can always be written first, because true unit tests test output given input.

              You know what the output should be for a given input, if you have to write the function first before you discover the possible outputs then your function is poorly defined from the beginning.

              Once you've got your tests for your success case(s) and your anticipated failures, then you can write your function, then _after_ that you might use coverage to see if there's a path in your function as it is written that

            • Every level of the system has an interface. Perhaps not literally, but logically. That interface has a limited set of behaviors, at least some of which are known ahead of time (because they are the reason for the function). Unit tests should always be written against interface behaviors, preferably one test per behavior. If you find more corner cases you can add more tests later, but the common case should be tested up front to verify the interface is easy to use. Do not write tests against implementations,
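The "output given input" discipline argued above can be made concrete: the assertions exist before, and independently of, the implementation. The parse_amount function here is a hypothetical example, not code from the thread:

```python
def parse_amount(text: str) -> int:
    """Parse a dollars-and-cents string into an integer number of cents."""
    dollars, _, cents = text.partition(".")
    # Pad or truncate the cents field to exactly two digits: "5" -> "50".
    return int(dollars) * 100 + int((cents or "0").ljust(2, "0")[:2])

# The interface tests that could (and, per the argument above, should)
# have been written before parse_amount existed:
assert parse_amount("12") == 1200
assert parse_amount("12.5") == 1250
assert parse_amount("12.50") == 1250
```

None of these tests needed to know how the function is implemented, only what the interface promises; coverage-driven tests for internal paths can come afterwards, which is the compromise both replies above converge on.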
    • Re:

      What, you think this is new? I graduated in the early 2000s. Sometime around 2009/2010 one of my old professors was bemoaning the fact that students didn't want to write any code any more, they just wanted to copy and paste different blocks together until it worked. Coincidentally, stackoverflow started in 2008.

      Overall, I am a huge believer in using chatGPT as support, today. I have used chatGPT to dramatically optimize SQL queries, suggest a new index, convert a legacy PHP program from Laravel 4 to Laravel 1

    • Re:

      Exactly. Just write the damn array code already. The only reason you should be asking for help is if there is some function that you don't know or remember or something like that. Does anyone remember to RTFM any more?

  • To be clear, in order for it to make its recommendation, it needed to understand the internals of how WordPress handles hooks

    No, to be clear, at some point ChatGPT was trained on text that dealt with WordPress hooks, and thus it had some relationship of tokens that was involved with what you wanted to know.
    ChatGPT has no "understanding" or computational knowledge about anything.

    I see a very interesting future, where it will be possible to feed ChatGPT all 153,000 lines of code and ask it to tell you what to fix... I can definitely see a future where programmers can simply ask ChatGPT (or a Microsoft-branded equivalent) to find and fix bugs in entire projects.

    Okay, so what exactly are we talking about? Syntax or behavior? If it is syntax then linters already do this, and they are built with the exact rules and best practices for that language. It is no black box, but something designed specifically to do that exact thing and do it very well. They can also reformat and fix code as well when it comes to syntax.

    If we're talking about behavior, then please tell me how you are going to describe to ChatGPT what the behavior of the 153,000 lines of code is supposed to be, so it will know whether or not there is something that needs fixing in the first place? Unless we're talking about something that could result in a total runtime failure, like dereferencing a null pointer or division by zero, there's no realistic way to express to ChatGPT what the code is supposed to do, especially at the kind of scale where 153k lines are involved. How about breaking the code down into functions, and defining inputs and expected outputs for each function so that ChatGPT would then know what the function is supposed to do? Good job, you just invented unit tests.

    • Define understanding. If it can parse the question, parse the code explanation, parse the code, and provide some kind of output from that, then I'd call that understanding, albeit maybe incomplete. Yes, it's been fed a load of text, but then so were you when you learnt. And yes, you can cite the Chinese room as a counterexample, but what Lenrose didn't consider was that it doesn't matter how it works inside; it's how it behaves outside that matters.

      People seem determined to think these LLMs are just dumb statisti

      • Re:

        This is an ongoing area of research, and there are some interesting findings.

        When you initially train an AI on a dataset, it starts as a statistical analyzer. I.e. it memorizes responses and poops them back out, and plotting the underlying vector space you see the results are pulled more or less randomly from it. Then, as you overtrain, the model reaches a tipping point where the vector space for the operation gets very small. Instead of memorizing, they appear to develop a "model" of the underlying opera

    • Re:

      Maybe you should try it out? Not on a 153k line program, but I've had great luck with pasting in the schema for ~a dozen tables and then having chatGPT optimize queries with 6-7+ joins, subqueries, etc.

      I think you might also be surprised at what chatGPT can analyze about functions and code. I hesitate to use the word "understanding" but this is one of those areas where chatGPT can surprise you.

  • Writing code is something I'd expect a LLM to be able to do well given enough learned source. Feeding individual problems to the generator makes sense but I wouldn't want to feed it 10k lines of code and just accept the result. You would need to read and understand the code you're using. It would be somewhat similar to using a library from an external project, except you can't trust the source.

    • Re:

      No, because ChatGPT isn't Turing complete.

      • Re:

        What does that comment even mean?

        • Re:

          It means you should take some CS classes.
          • Re:

            I know what Turing complete means. What would it mean for one of these language models to be Turing complete? What does it mean for it not to be? Are you just saying that because ChatGPT can't update its state model in the current version, it can't continuously learn? Or are you somehow saying that the entire language model concept cannot possibly be Turing complete? If the latter, how in the world do you prove that?

            • Re:

              There are a lot of different ways to define it. One way is to say that it can't recognize whether a phrase is valid in a Turing complete language. A simple example is that ChatGPT can't tell you whether a long enough string of parentheses is balanced or not.

              Wolfram goes into the topic in some detail in the second half of this post [stephenwolfram.com], you might find it interesting.

              "Language model" is a vaguely defined concept, but the current LLMs will need improvements in their algorithms before they are Turing complete (se
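The parentheses example above is the textbook case: checking balance needs an unbounded counter, which no fixed-size state machine has, and that is the gap the Turing-completeness argument is pointing at. The check itself is only a few lines:

```python
def balanced(s: str) -> bool:
    """True if every '(' has a matching ')' in order. The running depth is
    unbounded, which is exactly what a finite-state recognizer lacks."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a ')' with nothing open
                return False
    return depth == 0
```

Any fixed model can track nesting only up to some depth; past that depth the claim is it must start guessing, which is why "long enough" does the work in the comment above.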

      • Re:

        Why does it matter that ChatGPT is Turing complete or not? Turing-completeness is important to run code, not to generate code.

        Plus I would be extremely surprised if ChatGPT wasn't Turing complete; most complex computer programs are, sometimes even when we don't want them to be. More specifically to ChatGPT, transformer networks only execute a fixed number of steps to ultimately predict the next word. This is not Turing complete because you can't run loops, but ChatGPT has a context that can act like the tap

        • Re:

          Why did you write that? Are you saying that writing syntactically correct code is not important for writing code?

          Turn your brain on before typing.
      • Re:

        Neither are people.

    • Re:

      Exactly. AI is great when you can trivially verify the result to a complex problem, but not so great when the result is time-consuming or complex to verify. If you need a subject matter expert to verify the result and it’d take them as long as solving it themselves, there’s no benefit at all and a high likelihood of drawbacks as they discover errors in the result.

  • Sounds like a Chad moment to me

  • ChatGPT can shorten the time it takes a developer to do work, but it can't fix for incompetence.

    I used it the other day to help me shift some functionality from server side to client side, and the results have been very, very good. Saved me a lot of debugging time, even after I reviewed all the code by hand. I tested the functions and got the expected results right off the bat.

    Probably saved myself at least half a day of work.

    But I didn't ask it to do something large, I broke it down to manageable pieces that

  • If it's writing PHP, how can he tell it's writing shit code?
  • ChatGPT seems worse at producing working PowerShell code than it did shortly after it launched. It seems to make a lot more errors. It's still a timesaver to have it write code snippets, but those snippets must then be manually reviewed and tested because it often makes errors. Even something simple like asking it to extract the title out of HTML contained in a string, it wrote code that was basically perfect except that it forgot to escape one slash in the regex and thus the code it output produced a syntax error. An easy fix, but the error rate is so high that it's just a time saver at best.
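For comparison, the title-extraction task described above is short once the closing tag's slash is handled; the commenter's PowerShell isn't shown, so this is a rough Python equivalent:

```python
import re

def extract_title(html: str):
    """Pull the <title> text out of an HTML string, or None if absent.
    Note the closing tag: in some regex dialects and delimiters the '/'
    must be escaped, which is the one-character slip the comment describes."""
    m = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return m.group(1).strip() if m else None
```

As a reply below jokes, regex on HTML is fragile; anything beyond a quick scrape of well-behaved pages is safer with a real HTML parser.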

    • It's very random at times. I've had ChatGPT 4 as a subscriber since they released it for that, and it CAN be useful, but it can also be totally disastrous.

      In any case, you NEED to know something about what you are doing, and as a "human" you need to proofread A.I's results as well.

      It's very good at basic concepts such as initial code, specific calculation tasks, and things that are in the known universe, but it really falls short when you try to describe what you want from the code. It's like asking it to be creative like you can be: it can try, but it just can't. It's not human, it's not even "artificial enough"; it's just an LLM. It knows what it knows from the numerous documents, books, and data it has been trained on, and it can't really think, which many people misunderstand and believe it can. Well, it can't.

      But can it correct code? Sorta, yes. But it doesn't understand the general concept you're thinking of when you make a piece of code. It can look for correct code that doesn't fail, but unless you specifically instruct it in what numbers or outcome you expect, it won't understand that, and you'll get some random results. Sometimes they can be downright dangerous, so use that output with care. Read the example code ChatGPT gave you and see if you can spot some fatal things in there; you need to KNOW code and what you want. It's not a "magic piece" that will just code whatever you want.

      I've been using it numerous times to create artistic scripts for my Blender projects, and it's very hard work: no matter how much information you give it, it will constantly get it wrong, simply because you have to be SO specific about every little thing you want to achieve. It also doesn't have the latest data on debugging or recent compiler fixes, etc.; it often uses deprecated code to analyze your code, and chances are your code is better and more up to scratch, so to speak.

      So use it... for simple small tasks. If you screen the code you get, it can be a wonderful time saver, but it won't replace coders' jobs anytime soon. Anyone who tells you this lives in "laa laa land", has NO clue, and probably hasn't even used it extensively.

    • Re:

      Wait, what? You asked it to parse HTML using regexes?

  • "AI is essentially a black box, you're not able to see what process the AI undertakes to come to its conclusions. As such, you're not really able to check its work... If it turns out there is a problem in the AI-generated code, the cost and time it takes to fix may prove to be far greater than if a human coder had done the full task by hand."

    Umm... Don't you review/desk check your own code? Why wouldn't you expect to do the same with "AI" generated code?

    I've played w/ChatGPT generating code in a few langua

  • Ok... (Score:5, Insightful)

    by Junta ( 36770 ) on Sunday October 15, 2023 @03:28PM (#63926949)

    Instead of about two-to-four hours of hair-pulling

    Reworking those three lines of code to optionally accept two decimals should have been a 10-minute task, max. This may be a helpful tutorial for a beginner who already knows the problem and can distill it into a digestible snippet, but it doesn't necessarily imply much about more open-ended applications.

    This seems to be consistent with my experience: it can reasonably complete tutorial-level snippets that have been done to death, but if you actually have a significant problem that isn't already all over Stack Overflow, it will just sort of fall over.

    • Re:

      Thought the same. If it takes him 2-4 hours, then he clearly needs ChatGPT. It's not a testament to how great the tool is; rather, it shows how poor a coder he is.

  • by Mascot ( 120795 ) on Sunday October 15, 2023 @03:42PM (#63926969)

    I have many issues with how the abilities of this supposed "maintainer of code" come across based on these citations, but let's chalk that up to a need for brevity and me being too lazy to RTFA.

    A more important issue I have is that he seems to believe ChatGPT understands how WordPress handles hooks. Unless something's drastically changed in how ChatGPT functions, that's not at all what it does. It answered with what people have previously written in response to strings similar to the question. That's all.

    Not that I don't think LLMs can be helpful tools, but for the foreseeable future they seem firmly seated in the "make a suggestion or two and have a knowledgeable human take those into consideration" department, as well as being a slightly fancier snippets engine. I'd not worry about my job anytime soon. Then again, I'm old; my odds of being retired before AI takes over programming are above average.

    • Re:

      I’m turning 40 in a few weeks, and your assessment matches my own. I see nothing concerning here for me or my career. The proverbial “boss’ nephew” who “built the site in a weekend” that has dozens of console errors? He may not be around for much longer, but anyone who’s competent in the field has nothing to worry about.

  • ChatGPT is very weak at calculus (and also arithmetic). I asked to find the maximum of sin(x)/x, which led to verbose calculations filled with logic and math errors. Despite many hints, such as using l'Hospital rule, it would repeat the same mistakes again and again.
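    For reference, the calculation the poster asked for is short when done by hand. A sketch, treating sin(x)/x as its continuous extension with value 1 at x = 0:

```latex
f(x) = \frac{\sin x}{x}, \qquad
\lim_{x \to 0} \frac{\sin x}{x}
  \overset{\text{l'Hospital}}{=} \lim_{x \to 0} \frac{\cos x}{1} = 1.
```

    For x ≠ 0 we have |sin x| < |x|, so f(x) < 1 everywhere else; the maximum of the extended function is therefore 1, attained at x = 0. Any other critical point satisfies f'(x) = (x cos x - sin x)/x² = 0, i.e. tan x = x, and these all give smaller values.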

    • Re:

      This is because ChatGPT is an LLM - a large language model. It was not designed to perform mathematical operations. You are correct that it sucks at math, but that is not unexpected.

      • Re:

        It’s bad at logic or even just maintaining state. Try playing tic tac toe with it drawing the board. It declared itself the winner after making an illegal move (which was an impressive feat, given that we’re talking about tic tac toe) when I tried playing against it. I called it out, corrected the board state, had it repeat the correct state back to me, and then had it make more illegal moves. Over and over again. Never managed to finish the game.

  • but useful for inexperienced and out-of-touch developers. It is typically managers who are hyped about it.
  • Needing two to four hours of hair pulling to fix three lines of code. Perhaps another career is in order.
  • cool, but if it takes him 2-4 hours to convert a piece of code to accept dollars and cents instead of just an integer I think he has more serious coding issues.
  • I tried it with a simple "convert date to Unix timestamp" function for an embedded project I am working on. I spent hours debugging other code until I got a look at ChatGPT's function and found it simply does not work.
    So, YMMV, and you have to double-check everything. To me, it does not look like something that should be debugging someone else's code. :)
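    For what it's worth, date-to-timestamp conversion is a classic exercise with a well-known closed form. A minimal sketch (function names are mine; this is Howard Hinnant's civil-date arithmetic adapted to Python, with no library calls, as you'd want on an embedded target):

```python
def days_from_civil(y, m, d):
    # Days since 1970-01-01 in the proleptic Gregorian calendar
    # (Howard Hinnant's days_from_civil algorithm, adapted to Python).
    y -= m <= 2                      # shift the year so the leap day comes last
    era = (y if y >= 0 else y - 399) // 400
    yoe = y - era * 400              # year of era: [0, 399]
    doy = (153 * (m + (-3 if m > 2 else 9)) + 2) // 5 + d - 1
    doe = yoe * 365 + yoe // 4 - yoe // 100 + doy
    return era * 146097 + doe - 719468

def to_unix_timestamp(y, mo, d, h, mi, s):
    # UTC civil time -> seconds since the Unix epoch (leap seconds ignored,
    # as POSIX time does).
    return days_from_civil(y, mo, d) * 86400 + h * 3600 + mi * 60 + s
```

    All the work is integer arithmetic, so results can be cross-checked against Python's `calendar.timegm` on a desktop before porting.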

  • How do we teach ChatGPT (and the people who trust it to write code for them) that you NEVER use a float type to store currency, because the precision limitations will cause problems even with values like $0.10 and $0.20 - even though they look fine (to humans) as decimals?

    Store the value in an integer as cents (and calculate with ints) and format it when you need to.
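    A minimal sketch of that integer-cents approach (the function names are illustrative, not from any particular library):

```python
def parse_dollars_to_cents(text):
    # Parse a string like "12.34" into integer cents without ever touching
    # a float (0.1 + 0.2 == 0.3 is False in binary floating point).
    sign = -1 if text.startswith('-') else 1
    if text[0] in '+-':
        text = text[1:]
    dollars, _, frac = text.partition('.')
    frac = (frac + '00')[:2]          # pad or truncate to two digits
    return sign * (int(dollars or '0') * 100 + int(frac))

def format_cents(cents):
    # Integer cents back to a display string; formatting happens only
    # at the edges, all arithmetic stays in int.
    sign = '-' if cents < 0 else ''
    d, c = divmod(abs(cents), 100)
    return f'{sign}{d}.{c:02d}'
```

    Sums, totals, and comparisons are done on the ints; the string form exists only for input and display, so $0.10 + $0.20 is exactly 30 cents.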

  • Is this article about 6 months late or what?
    • I have to say that LLMs being decent at programming comes only from the HUGE trove of free code available online, on GitHub and HTML pages all over the web. Even then it still makes some rookie mistakes. On many other topics, with less than millions of available pages as input, it gets even less accurate. Or at least, you really need to go check its work.

      Summarizing papers and writing technical text may be the most useful time savers beyond programming in my work.

  • You hit it on the head with the black-box point, not only because we don't know how it's arriving at its conclusions, but also because the more reliant we become on it, the less understanding developers will have of the existing codebase and functionality.

    An employer doesn't always just pay you to code and fix bugs; they pay you for your understanding of the codebase. Someone saying, "I don't know how it works, ChatGPT did it" is giving an unacceptable answer.

    Saying I don't know how it works, chatgpt

  • [I] went back to my normal technique of digging through GitHub and StackExchange to see if there were any examples of what I was trying to do, and then writing my own code.

    Hence the code to debug is an assemblage of stuff posted to public forums, and ChatGPT was trained on exactly that. It was fed questions containing the offending code, along with their answers. Usual bugs, usual fixes.

  • Call me a luddite but I'm a bit against the current 'AI' that basically scraped the internet to build up its engine, but this is something that I always hoped for when it came to code generation.

    I had these ideas of somehow feeding it all the source material for a given language, compiler docs, the language docs, and rules, etc, and then being able to describe functions and have it generate the base code for it. Why? because coding for me was a path not taken. i did all the schooling, got a degree, and

  • Kinda sounds like it is doing what IDE hints and linting have done for decades, with a more conversational and thus less precise interface.
  • This ChatGPT programmer bullshit only works when the system is given a limited set of input. The platform literally prevents you from uploading several megabytes of multiple files, which would be absolutely necessary to give it context in order to solve any problem of significant scope. Instead, people are asking it to rewrite their 50-line functions to work with dollars instead of fuckwits, and then posting their magic results onto social media in hope of clicks, because well they are fuckwits.

    I'm bored of
