
Ask HN: What are your thoughts on ChatGPT as a professional?

source link: https://news.ycombinator.com/item?id=39213359

17 points by demyinn 3 hours ago | 19 comments
I’m a new graduate starting an internship. It’s imperative that I do well to receive a return offer. During my job search, I built projects with ChatGPT’s help. I didn’t tell GPT to just build me a project so I could copy and paste it; I genuinely use it as a “Jarvis,” like Tony Stark. That said, is there any stigma around using ChatGPT in the workplace? I don’t like to use GPT to code for me. I like to use GPT as an imaginary person who collaborates on projects. For example, I ask questions like, “I want to implement feature X. It should be used like this… My plan to implement it is to do a, b, and c. Is my structure a good and professional solution?”

Usually, I get a really helpful and insightful response that optimizes my design. Sometimes I come up with a solution that might work, but GPT helps me come up with a better one. This ends up teaching me so much because I actually absorb the information. It helps me debug too; I find it’s a shortcut to debugging. The way I see it, I can spend 2-3 hours googling, or I can spend about 45 minutes talking to GPT to pinpoint the bug and come up with a viable solution. All that said, what are your thoughts? Am I too reliant on it? Is this a “healthy” relationship with AI? I want to be a strong engineer. I don’t want to pick up sloppy habits and become a poor engineer. My perception of GPT is that it’s a new resource to take advantage of. I need some input. Thanks!

I find it excellent for learning the basics of something I don't have much experience in; its broad knowledge helps a lot there. When it comes to finding super accurate, up-to-date, or state-of-the-art things, it's not great, but it can definitely lead you in the right direction. And that removes so much friction from trying new things.

Note: I'm a technical freelancer, mostly doing code but also some 3D modeling, game development, etc.

I find it useful to handle things like creating diagrams and describing architecture, since I have some trouble converting what’s in my head to diagrams on paper. I can verify if the output matches my description of the requirements and make any surgical changes to the UML by hand.

I don’t use it for programming though, and I do worry about otherwise talented engineers using it as a crutch. There has been more than one occasion in code review where an answer to one of my comments was “well, this is what copilot said…,” which I take a pretty dim view of. In that sense it’s like the early 2000s when people started thoughtlessly deferring their intelligence to sat nav.

I find it excels as a "rubber ducking" partner, as a replacement for "tldr" on esoteric utilities whose documentation is either lacking or overly verbose, and most especially as a non-technical tool for generating concepts you may not have thought of (e.g. "what aspects should I be considering when designing X").

Most of these use cases work around the "problem" of hallucination (except perhaps the 2nd one); it's ideating for you and you judge what's useful or not. As such, it's one more productivity tool I feel people should learn to use, with the relevant understanding and care.

I find it's great for very generic work, like boilerplate, or when you've forgotten how to implement a specific, well-defined algorithm. However, relying on its output is very risky if you aren't verifying everything it puts out. A good test to show this is to ask it to write a short essay on something you know very well, then behold all the incorrect information it enthusiastically tries to feed you.

Also:

* It has a horrible habit of inventing properties on objects or methods in libraries.

* It will very happily straight up lie to you about things it does

* Often when you ask it to make a specific change, it will give you back exactly the same as last time.

* For the love of God, don't put any company-owned code into it.

* Maybe I'm just bad at prompting...

No stigma; just don't get too reliant on it. Some of the stuff it spits out is terrible or entirely made-up gibberish that doesn't even work.

Use it to save time. Stuff like writing boilerplate, documentation drafts, summarizing things, looking up stuff you'd normally use stack overflow for, things like that.

It’s a tool for personal productivity, just like spreadsheets and python scripts. I use it as a super dictionary, to write text and code or just satisfy my curiosity. It’s useful when I am knowledgeable enough to verify the output. It’s a technology that augments my abilities, which is what technology should do.

But I never let it speak for me, and I find it rude to make someone take 5 minutes to read something I “wrote” in 5 seconds. Technology should make us better humans. It should amplify our best traits, and our ability for bullshitting isn’t one of them.

It’s a tech demo, data acquisition, and research/prototyping operation for OpenAI that’s heavily subsidized and aimed at building a quality dataset for GPT-5 and beyond. Effectively, they’re using initial models that were trained in an unsustainable fashion and offer only a temporary moat to buy an investor narrative plus a stream of new data and users, both of which are potential new moats and close the gap to Google.

It’s not a serious product.

> is there any stigma with using ChatGPT in the workplace?

I work at a big tech company. ChatGPT (and equivalent systems) is becoming pervasive in developer tooling. There's no stigma; its use is actually encouraged.

I don't think it's a life changer or that it'll save engineers the time to learn their craft.

> There's no stigma; its use is actually encouraged.

At companies I've worked at, it's actively encouraged _if you follow the rules_. Specifically, you shouldn't be using SaaS LLMs like ChatGPT, Copilot, etc. with your personal account for business purposes. That's likely a violation of your NDA.

However, it's absolutely encouraged to use the tools that the business has blessed. That's what they're there for. Local LLMs seem to depend on the licensing of the weights, etc., but are also kind of a "don't ask, don't tell" situation.

Of course, if the business hasn't gotten an approved tool by now... well, in that case there are going to be lots of employees using whatever they want, and you should probably be looking for a new job because that business is behind.

Also, I wouldn't agree there's "no stigma". Maybe I'm backwards, but I know some people who use it to just make up word vomit for peer evaluations, which I definitely judge them for.

Can you explain how you use it? Every time I try to use it for programming, the output is somewhat nonsensical. I also tried Copilot, but that was even worse (for Terraform and Python).
I use it for well-defined tasks where I know I'd find the answers by looking at documentation (and where I know how I'd approach the problem but can't remember the details). So instead of rummaging through libraries' docs (or API docs), looking up a language's syntax I don't use every day (looking at you, SQL JOINs), etc., I'll craft a prompt to ChatGPT to get me through 80% of the doc-digging.

It will eventually hallucinate some property that doesn't exist, but since I kind of know what I'm looking for and just need the information condensed to move forward, I'll check its output against the API/library docs and find my way through much faster than I would by starting on my own.

To me it's not a code generator, aside from boilerplate and starting points in languages where I'm not an expert. It even helps me a lot when learning a new language (like Rust), where I can get a simple skeleton out of it for a task I want to do and then code on my own later, returning to ChatGPT only to ask things like "how could I open a file with write access in <language I don't normally use/I'm learning>".
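
For that particular question, the kind of minimal skeleton I'd expect back looks something like this (just an illustrative sketch, nothing project-specific):

    use std::fs::OpenOptions;
    use std::io::Write;

    fn main() -> std::io::Result<()> {
        // Open (or create) the file with write access, truncating any existing contents.
        let mut file = OpenOptions::new()
            .write(true)
            .create(true)
            .truncate(true)
            .open("output.txt")?;
        writeln!(file, "hello from a language I'm still learning")?;
        Ok(())
    }

From a starting point like that, I can read the OpenOptions docs myself and adapt it, which is exactly the 80% head start I'm after.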

It's never helped me with any business logic properly (and I don't feed it my company's code, I will write a more generalised case as a prompt, or swap to an analogy to what I'm trying to do), it probably needs a lot more context that I'm not happy nor comfortable to feed into OpenAI's training dataset.

I've been playing with Mistral's models locally through ollama, and what you can do with a local model is quite promising: you can feed it a lot of context without caring about where private data is being stored. I see a big future for these models as code assistants if they keep evolving.
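
In case it's useful, here's roughly how I've been talking to the local model from code. This is only a sketch and makes a few assumptions: ollama is serving its default API on localhost:11434, you've already pulled a model (mistral here), and the reqwest (with the "blocking" and "json" features) and serde_json crates are available:

    use serde_json::json;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let client = reqwest::blocking::Client::new();
        // One-off question to the locally running model; stream=false asks for a single JSON reply.
        let reply: serde_json::Value = client
            .post("http://localhost:11434/api/generate")
            .json(&json!({
                "model": "mistral",
                "prompt": "How could I open a file with write access in Rust?",
                "stream": false
            }))
            .send()?
            .json()?;
        // The generated text comes back in the "response" field.
        println!("{}", reply["response"]);
        Ok(())
    }

Nothing leaves the machine, which is the whole point for me.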

Like all good things, just use it in moderation.

You don't use a hammer for all carpentry work.

> is there any stigma with using ChatGPT in the workplace?

Any? Sure. At the organization level, there are workplaces that completely ban ChatGPT and will fire you for using it, and there are workplaces that buy all their engineers a subscription to it and actively encourage using it. At the individual level, there are people who make heavy use of it, people who can't stand it, and (most commonly, imo) people who have nothing against it but don't personally find it very useful.

Don't use it if your company bans it, but otherwise do whatever you want and maybe check with your coworkers about how they feel towards it before revealing to them that you're using it, just to be safe.

> Am I too reliant on it? Is this a “healthy” relationship with AI?

Based on what you've said, your relationship with it seems perfectly fine. The concern with coding AIs is that novice programmers will encounter a problem they can't solve on their own, ask an AI to solve it for them, and copy and paste the output without understanding the problem or the solution. In the best case, the code can work but have subtle flaws that won't show up until later. In the worst case, the code doesn't even make sense and you annoy your peers by making them review zero-effort garbage.

But if you're just using it to further explore the solution-space of a problem that you already know how to solve in at least one sensible way, there's no danger there and you're just learning and improving. Just be sure to fully understand any AI solution you encounter before internalizing it as a lesson.

If we froze all coding AIs right now and prevented them from continuing to improve, I think you'd find them becoming less and less useful as you gain experience. Obviously, they do much better solving common problems with lots and lots of public solutions already available, and more senior engineers don't tend to need help with those or have much to learn about them. That being said, I know plenty of very capable senior engineers who use AIs to generate repetitive boilerplate (often tests) in verbose/inexpressive languages like Go. Anyway, depending on how quickly these AIs continue to improve, you may never get the chance to "grow out of them"; they might be able to keep teaching you increasingly complex things until they finally replace all of us. Who knows!

LLMs are too new, and too much in flux, to answer this question well. I can give an answer now, and in two months it might be obsolete. So all I can do is say something generic like:

It's a productivity tool that will find its place alongside similar tools like intellisense and linting. Just like those tools, it will change the way people code: they will need to hold less info in their minds, and that can potentially allow them to reach greater heights, or learn faster. Or it could potentiate laziness instead. That's down to the individual (and it applies to more than just coding; LLMs are equally used in fields like art, translation, marketing, copywriting, etc.).

As for my personal usage, here are a couple of thoughts that might be obsolete in two months:

* It's awesome for looking up basic algorithms like sorts

* It's pretty great for doing simple things with well established code bases (e.g. React)

* It's even better for doing those things if you are already an expert so that you know how to guide it

* It's pretty terrible for doing anything with more obscure/recently released code bases, whether you are an expert or not

* I have heard it suggested that this will make people want to avoid working with less well-known libraries. That remains to be seen, I guess

Thoughts on what this means in a "professional" career:

1. If your professional career takes you down well-trodden paths (e.g. writing React apps), it will probably be a big part of your work

2. If your professional career leads you to working on obscure systems, or even writing those systems, it won't be as useful for you except as a reference for algorithms, or for writing comments (this is where I have been recently).

However, if someone can figure out how to retrain the models constantly on new info, so that even the newest releases become part of the training data just as fast as they would show up in a Google search, that will change things a lot. Likewise, if it can be trained on your own personal obscure codebase (I think this might already be possible?), that would be a big deal.

Finally, as for stigma: I don't think so. There are privacy issues, but these can be worked around by running a local LLM, and those are getting better and better. If you have a graphics card with 16 GB of VRAM, I think there are already models you can run locally with performance similar to GPT-3.5.

As a recent graduate (in SE/CS, I presume), you should have better-quality knowledge than is exhibited by the volumes of tutorial material that ChatGPT was trained on.

I have decades of experience as an SE and have taught postgrad SE courses at uni. I have experimented with ChatGPT and Bing. With well-thought-out questions, LLMs return better answers than Google and similar searches. But not always. Distrust and verify!

Somebody described LLMs as "Endlessly Enthusiastic Savant Intern". Just don't try to get it to do high-level systems design or to understand business domains, requirements, etc. If you treat ChatGPT as a gofer to research information, then you can certainly save time. But you do need to apply your judgement and knowledge to ensure that what you produce is appropriate and functions correctly.

In due course, you will gain professional experience and expand your knowledge horizons. By all means, continue to read widely and in depth. Never forget that ChatGPT and the like are just some of the many tools you use.

In my father's day, proficient use of the slide rule was mandatory. For me, electronic calculators and computers were the tools of the trade. For the emerging generation of engineers, AI is the contemporary tool. We are yet to glimpse what will be next. However, foundational knowledge in your discipline, with ongoing learning, remains the essential superpower.

ChatGPT will become irrelevant somewhat soon (at this rate). Warning: not a professional opinion.