
How do you use AI development tools?

source link: https://lobste.rs/s/dqz1uk/how_do_you_use_ai_development_tools


I’ve tried GitHub Copilot, ChatGPT, and Cody to diagnose problems in existing codebases and to write code in unfamiliar frameworks and languages. The most generous word I would use to describe my impression of these AI tools is “uncanny.” Beyond that, I can’t say I’ve found any of them reliable enough to use for anything more than giving brief summaries of publicly documented concepts not readily available elsewhere, and I’m not sure I can entirely trust even those. I find the task of correcting their wrong code and ignoring their bad or hallucinated advice more arduous than just R-ing TFM, using traditional debugging tools, and writing my own code with only the autocompletes provided by language servers. Maybe I’m turning into an old fuddy-duddy, clinging to the romantic delusion that there’s a “craft” to programming, and not using these AI tools in a way where they can really shine.

Who here has incorporated AI tools into their development workflow? How do you use them? Anecdotes welcome.

  1. I cancelled my Copilot subscription pretty quickly – it almost never provided anything useful. Maybe I was doing it wrong, but I mostly found it to be annoying.

    As a self-taught engineer, I have had a lot of luck using ChatGPT to explain certain CS concepts to me – the back-and-forth conversational format has helped to make certain things “click” for me for the first time, especially because I can ask for pseudo-code or toy implementations to illustrate an idea. I’ve also had some luck getting it to explain/summarize what e.g. some function in an open source codebase is doing or why it’s there, or what some abbreviation in code probably stands for (it’s insanely good at the latter). Usually, as my confidence with the codebase increases, I find that those explanations were mostly right, or right enough for where I was at the time.

    In my day-to-day work, I usually have ChatGPT write simple but tedious-to-write functions or regular expressions. Sometimes using it for ideas or common patterns to achieve XYZ works out well, other times less so. Having it implement anything more complicated or involving a fast-moving API or getting into the nitty gritty implementation details of something nontrivial tends to be a disaster.

    On the whole, though, I find ChatGPT to be a pretty useful tool, but I still reach for API documentation or source code first when trying to understand something new.

    Edit: One other task I use it for whenever I’m doing data work is to generate CSV data or DDL statements to play with, or a Python script to generate test data with a particular shape.
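    A sketch of the sort of test-data generator being described (the column names and shape here are made up for illustration):

```python
import csv
import random
import string

def generate_test_csv(path, rows=100):
    """Write CSV test data with a made-up shape: id, name, amount."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "amount"])
        for i in range(rows):
            # Random 8-letter name and a price-like float per row
            name = "".join(random.choices(string.ascii_lowercase, k=8))
            writer.writerow([i, name, round(random.uniform(0, 1000), 2)])

generate_test_csv("test_data.csv", rows=10)
```

    Exactly the kind of tedious-but-trivial script that is quicker to proofread than to type.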

    Finally, I use it with the WolframAlpha API to play with ideas or cook up numerical examples / plots for my research when I’m not at home.

    It’s important to be critical of your interactions with it and remember that it’s “just” a text prediction engine.

  2. I use GPT-4 both to generate code and to do various small tasks while programming. For code generation, I just use it to make a first draft of scripts, then manually tweak the result to do what I want. It’s most useful when I’m using an unfamiliar API (like controlling Spotify) or an unfamiliar language that I can then muddle through. That’s how I wrote the C++ race condition in this post. More recently, I did this:

    Write a python script that 
    
    1. reads an XML file
    2. finds the first "animation" element
    3. reads the "svg" tag of that element, which contains a path to an svg file
    4. updates the animation element's content with the content of the svg file.
    

    It was close enough to what I wanted that I saved more time reading and modifying it than figuring out how to write it from scratch.
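    A minimal sketch of what that prompt yields, assuming the SVG path lives in an `svg` attribute (the prompt is ambiguous between an attribute and a child tag):

```python
import xml.etree.ElementTree as ET

def inline_svg(xml_path):
    """Steps 1-4 from the prompt: parse the XML, find the first
    <animation> element, read the SVG file path from its 'svg'
    attribute (an assumption; the prompt could also mean a child
    tag), and replace the element's content with that file's text."""
    tree = ET.parse(xml_path)
    animation = tree.getroot().find(".//animation")
    with open(animation.get("svg")) as f:
        animation.text = f.read()
    tree.write(xml_path)
```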

    Most of the time, though, I use GPT-4 for small auxiliary tasks. Some examples:

    1. “In the python library dagster, where is the class that manages the workflow DAG defined?”
    2. “Write a vim regex that converts {thing} to {thing}”
    3. “Below is a list of libraries. For each library, give the link to the API documentation site and the github repo, if any.”
    4. “Git search for diffs that added or removed a string”
    5. “Convert this verbal description into a PlantUML diagram.”
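    Prompt 4 above usually resolves to git’s “pickaxe” options; a self-contained demo (the throwaway repo, file, and search string are made up):

```shell
# Throwaway repo so the demo is self-contained
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"
echo "needle" > notes.txt
git add notes.txt
git commit -qm "add needle"

# -S: commits whose diffs changed the number of occurrences of a string
git log -S "needle" --oneline

# -G: commits whose diffs contain a change matching a regex
git log -G "needle.*" --oneline
```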

    It takes some creativity to use well, but once I got the hang of it it became one of the most versatile tools in my toolkit. I once described it as “a genie that can make any tool you describe, except shitty.” Turns out that comes in handy more than you’d think!

  3. My concern with using ChatGPT, Copilot, etc. is code ownership, given the outcome of a recent case involving AI-generated artwork used in a comic book. So for now, I don’t. I’m also wondering if certifying that your patch/pull request/whatever did not involve the use of an LLM-based tool will become a thing.

  4. luke8086:

    Maybe I’m turning into an old fuddy-duddy, clinging to the romantic delusion that there’s a “craft” to programming, and not really using these AI tools in a way where they can really shine.

    IMHO, the job of an engineer is to get past their hopes, wishes and delusions, accept that reality is complicated and messy, and just try to make the best of it.

    At least for now, LLMs are heavily overhyped, but they’re another tool in the toolbox that may prove useful from time to time.

    In my case:

    • I use Copilot as a fancy autocomplete. I don’t expect it to be right, but in practice it spares me some typing
    • I sometimes use GPT for generating sample data, e.g. for tests
    • From time to time I ask GPT when I broadly know what I want to code, but don’t remember specific details like syntax/APIs/whatever. I don’t expect the output to be fully correct, but it often gets me on the right track without googling or going through the docs
    • For example, one time I asked it to generate me a regexp from a bunch of sample strings, because I didn’t remember some less common features, and it did an absurdly great job, including explaining all syntactic elements one by one
    • As broken as GPT’s code can be, sometimes it’s able to point out bugs in my own code :) I don’t usually use it for error checking, but it’s one area to explore
    • Things like generating text from bullet points, correcting grammar, etc.
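    A regex-from-samples answer of the kind described in the bullet above might look like this, with the per-element explanation inlined as comments (the sample string and field names are hypothetical):

```python
import re

# One pattern for samples shaped like "user-42 (admin)"
pattern = re.compile(r"""
    (?P<name>[a-z]+)    # lowercase username
    -                   # literal separator
    (?P<id>\d+)         # numeric id
    \s+                 # whitespace
    \((?P<role>\w+)\)   # role wrapped in parentheses
""", re.VERBOSE)

m = pattern.match("user-42 (admin)")
print(m.group("name"), m.group("id"), m.group("role"))  # user 42 admin
```

    The less common feature here is `re.VERBOSE`, which allows whitespace and comments inside the pattern.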
  5. I have a hammer on my desk and I know where to find the building’s circuit breakers. :-P

    In all honesty, I could probably benefit from using it to conjure up data or code for integration tests. I’d probably reach for a dynamic fuzzer like AFL first, though. I deal with the hype mostly by tuning it out; I’ll come back to it in 3 years and find some sort of killer application that makes my life way easier.

  6. Ameo:

    I use Copilot extensively in my day-to-day workflow, and I use GPT-4 via ChatGPT regularly to generate and modify code.

    For the most part, I use Copilot as a super-powered autocomplete. I usually know the code I want to write, and would write something similar myself. However, Copilot is faster than me typing it out manually - especially in the case of boilerplate-y stuff and test code.

    In the case that I’d want to look something up online, I’ve found that appending a comment and letting Copilot take a guess often works out. Example:

    display: flex;
    /* avoid text wrapping and overflow and truncate with ellipsis */
    < copilot fills in 3 lines of CSS to accomplish this >
    

    It’s overall a huge timesaver and reduces tedium while coding. However, I’ve also had cases where I get a huge amount of value out of letting GPT-4 generate a ton of code by itself from a verbose prompt.

    One example of this where I’ve found big success is implementing fundamental frequency (f0) detection for audio signals. I asked ChatGPT about existing algorithms, did some research, picked one, asked it to implement it, asked it to make some changes to fit my use-case, debugged it, and the end result worked great: https://chat.openai.com/share/2d6ecb7c-5cce-49e8-a754-a35e2e9049b0
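    The linked chat isn’t reproduced here; for flavor, a minimal autocorrelation-based f0 estimator (one common algorithm for this task, not necessarily the one chosen in that conversation) might look like:

```python
import numpy as np

def estimate_f0(signal, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate fundamental frequency via autocorrelation peak picking."""
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[corr.size // 2:]        # keep non-negative lags only
    lag_min = int(sample_rate / fmax)   # shortest period considered
    lag_max = int(sample_rate / fmin)   # longest period considered
    lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / lag

# A 220 Hz sine should come out near 220 (quantized to whole-sample lags)
sr = 8000
t = np.arange(sr // 2) / sr
print(estimate_f0(np.sin(2 * np.pi * 220 * t), sr))
```

    The resolution is limited to whole-sample lags; real implementations add parabolic interpolation around the peak.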

    I’ve had similar success with asking it to generate me code for 2D graphics applications. Here I pasted it some of my existing code, described the new feature I needed to add, and it generated me valid Rust code that computes the lengths of splines using some calculus: https://chat.openai.com/share/61aa641d-05fb-4e21-8fbf-7e72ee44ce48

    I went back and forth with it a few times to change the code and iron out a few issues, but the end result works for me. This is something that I’m sure others would find easy, but I could see it easily taking me >5x the amount of time to research how to do that and implement it myself.
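    As a rough Python analogue of that spline-length code (the original was Rust, and the exact spline type isn’t shown; this sketch sums chord lengths over a quadratic Bézier, a standard numerical approach):

```python
import math

def quad_bezier_length(p0, p1, p2, steps=1000):
    """Approximate a quadratic Bezier's arc length by summing chords."""
    def point(t):
        # Standard quadratic Bezier: (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2
        u = 1.0 - t
        return (u * u * p0[0] + 2 * u * t * p1[0] + t * t * p2[0],
                u * u * p0[1] + 2 * u * t * p1[1] + t * t * p2[1])
    total = 0.0
    prev = point(0.0)
    for i in range(1, steps + 1):
        cur = point(i / steps)
        total += math.hypot(cur[0] - prev[0], cur[1] - prev[1])
        prev = cur
    return total
```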

    I have dozens of other examples like this. I’ve gotten much better at adding the right amount of guidance and context to my inputs to get the kinds of outputs I want.

    It’s also great for generating little Bash scripts and one-off command lists for setting up DB backups, running data exports, automatically mounting new hard drives, etc.: https://chat.openai.com/share/886ec2c8-1621-4849-8c17-33ae7efd6c20

    Again, it’s all stuff that I’d be able to figure out using online resources or existing knowledge, but ChatGPT is almost always quicker and easier and usually does a great job.

    I find the task of correcting their wrong code and ignoring their bad or hallucinated advice more arduous than just R-ing TFM

    This is not my experience at all. I’m often quite verbose with my prompts - for ChatGPT at least - and as I mentioned I’ve developed tricks for interacting with Copilot more effectively. I’ve found that the best way to use Copilot isn’t to just let it generate a huge block of code on its own like in the fancy demo videos; it’s to use it as a super-powered, context-aware, intelligent autocomplete.

  7. I use ChatGPT to figure out why I’m doing something, sometimes, or when I need to do something that isn’t done often. Example prompts:

    “python datetime vs time.struct_time? concise no context” (super necessary command)

    “python struct_time datetime conversion? concise no context”

    “functools itertools example code for polynomial rolling hash function? concise no context”

    Usually it doesn’t generate exactly what I want, but it does generate something that leads me the right way. I also use it to solve problems in a set theory textbook.
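    The struct_time/datetime conversions those prompts ask about boil down to a couple of lines:

```python
import time
from datetime import datetime

st = time.localtime()        # time.struct_time, broken-down local time
dt = datetime(*st[:6])       # struct_time -> datetime (year..second)
back = dt.timetuple()        # datetime -> struct_time
print(dt.isoformat())
```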

  8. I have used DALL-E to generate an icon for my project. That’s been really helpful, as I wouldn’t be able to do it myself. As for writing code, I haven’t really tried any tools yet.

  9. feoh:

    I use GPT for generating test skeletons and to help get me past the tabula rasa paralysis I sometimes get when approaching a problem.

    Even if what it spits out is UTTERLY WRONG, the starting point makes it easier for me to understand what needs doing and make it happen.

    I don’t personally favor using it for code completion, I’d much rather have an editor or IDE that has a deep understanding of my codebase and helps with things like possible method choices, parameter types and hints, and the like.
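    For what it’s worth, a generated test skeleton of the kind mentioned above might look like this (unittest here as an assumed framework; `slugify` is a hypothetical function under test):

```python
import unittest

def slugify(title):
    """Hypothetical function under test."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Hello   World "), "hello-world")
```

    Even if the generated cases need rewriting, the scaffolding is in place, which is the point.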

  10. I use Cody (https://sourcegraph.com/cody) to teach me a new open source project. Without Cody, I need to learn the language, the framework, and the project structure separately and combine them on my own. Cody can put relevant files into context and answer my newbie questions.

    1. Really awesome to hear this. Going to pass this along to the team.

  11. I’ve used it for a variety of purposes, many outlined in the other responses.

    Using it to do substitution, instead of creating a regex to do it in an editor, has been helpful.

    Something like “Given this list … , insert the first column at the marker “{{first}}”, the second column at the marker “{{second}}”, etc.”, and it will give me back the filled-in list. I can add conditions that would otherwise be difficult or impossible for a regex to handle.
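    A sketch of the substitution being described, with hypothetical markers, template, and rows:

```python
# Hypothetical template with markers and a two-column "list"
template = 'INSERT INTO users VALUES ("{{first}}", "{{second}}");'
rows = [
    ("alice", "alice@example.com"),
    ("bob", "bob@example.com"),
]

# Fill each marker from the corresponding column of each row
lines = [
    template.replace("{{first}}", a).replace("{{second}}", b)
    for a, b in rows
]
print("\n".join(lines))
```

    The LLM version of this accepts fuzzier conditions (“skip rows that look like test accounts”) that plain string replacement can’t express.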

