
Ask HN: How are you using GPT to be productive?

source link: https://news.ycombinator.com/item?id=35299071

208 points by yosito | 194 comments
With GPT so hot in the news right now, and seeing lots of impressive demos, I'm curious to know, how are you actively using GPT to be productive in your daily workflow? And what tools are you using in tandem with GPT to make it more effective? Have you written your own tools, or do you use it in tandem with third party tools?

I'd be particularly interested to hear how you use GPT to write or correct code beyond Copilot or asking ChatGPT about code in chat format.

But I'm also interested in hearing about useful prompts that you use to increase your productivity.

I paste whole blocks of code into it and ask it to improve it, like make it simpler or reduce duplication. If I have a straight-forward 'thing' I need to do, like, break a file up into chunks of a certain size, I'll ask it to produce that code. So, scenarios where there's a clear-cut task. I recently had to write an SDK in a bunch of languages, I had it convert most of it from one language to another without a huge amount of refactoring/tweaking.

I exported all of my trades into CSV format in 3commas, and asked it to generate the Python code to analyse various hypotheses about that data, which I then pasted into a Jupyter notebook.

It's incredible how much time it's saving me day to day already!
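
For anyone curious, the analysis code that comes back tends to look something like this. A minimal sketch, assuming a trades CSV with hypothetical column names (pair, profit_usd, closed_at); this is not the actual 3commas export format:

    import pandas as pd

    # Load the exported trades (column names here are hypothetical)
    trades = pd.read_csv("3commas_trades.csv", parse_dates=["closed_at"])

    # Hypothesis: win rate differs by trading pair
    trades["win"] = trades["profit_usd"] > 0
    print(trades.groupby("pair")["win"].mean().sort_values(ascending=False))

    # Hypothesis: profits cluster on certain weekdays
    print(trades.groupby(trades["closed_at"].dt.day_name())["profit_usd"].sum())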

So much:

- generate AWK to transform text

- modify a Python script to make it multithreaded (when this worked first go I was very mindblown; see the sketch after this list)

- explain concepts that I half understand in clear language, eg explain sharpe ratio and show me the math

- “what do people usually call” type questions. Like what are common job titles for people who do X and Y?

- proof read my emails and make suggestions given a stated goal

- settle language/choice of words discussions by asking GPT to reverse pitch understanding, then choosing the one that’s most aligned with the point we’re trying to make

- generally Linux-y commands and explanations: “best way to remap keys in i3”, or finding a file with content “xyz” using a tool faster than find
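
For the multithreading item above, the change usually amounts to the standard ThreadPoolExecutor pattern. A rough sketch, with fetch_url and the URL list standing in for whatever the original script looped over:

    from concurrent.futures import ThreadPoolExecutor
    import urllib.request

    def fetch_url(url):
        # placeholder for the per-item work the original script did in a plain for-loop
        with urllib.request.urlopen(url) as resp:
            return url, len(resp.read())

    urls = ["https://example.com", "https://example.org"]

    # the sequential loop becomes a thread pool
    with ThreadPoolExecutor(max_workers=8) as pool:
        for url, size in pool.map(fetch_url, urls):
            print(url, size)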

For coding, I've been using it like Stack Overflow. It really decreases my barrier to doing work because I can ask lazy follow-up questions. For example, I might start out by asking it a question about a problem with Pandas like "How do I select rows of a dataframe where a column of lists of strings contains a string?". After that, GPT realizes I'm talking about Pandas, and I'm allowed to ask lazy prompts like "how delete column" and still get replies about Pandas.
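
For reference, the answer to that first Pandas question usually comes back as something like this (a sketch; the frame and column names are made up):

    import pandas as pd

    df = pd.DataFrame({
        "title": ["a", "b", "c"],
        "tags": [["python", "pandas"], ["rust"], ["pandas", "sql"]],
    })

    # select rows where the list-valued "tags" column contains "pandas"
    mask = df["tags"].apply(lambda tags: "pandas" in tags)
    print(df[mask])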

I also use it for creative tasks - for example I asked it for pros and cons of my cover letter and iterated to improve it. I also used it to come up with ideas for lesson plans, draft emails, and overcome writer's block.

GPT has drastically lowered the emotional-resistance barrier to doing creative tasks and improved the quality of my output by giving me creative ideas to work with.

So on my coding problems I haven't had much luck. It doesn't seem to know Bazel, and the Rust code I asked about was completely hallucinated, but it did solve a problem I had with Azure DevOps.

I think if the training set did not contain enough of something it can't really think of a solution.

What is really nice, though, is, as you say, the refinement of questions. Sometimes it's hard to think of the right query, maybe you're missing the words to express yourself, and to ChatGPT you can say "yes, but not quite."

This could mean the future goes one of two ways. Engineers get lazy and converge on using only programming languages which AIs understand or have been trained on, or we forget about this waste of time and work on more important problems to solve in society other than the lack of an AI to be our crutch. Sadly, I think the former is more likely.

I think most people will just keep programming the way they do, and the AI hype will mostly die down. People have been saying that C++ is dead for decades, yet here I am writing code in it with a big community of others who do, too.

Yeah, I gave it a simple task of encoding a secret message in a sentence by using the first letter of every word. Hello = "Hey everyone, lick less onions". I worked with the prompts for over an hour to try to get it to complete the task, and while I did have some success, it really struggled to reason about the task or provide a valid response. If it can't even reason about a child's game, I can imagine it struggles with a programming language it has barely seen. I don't think it's actually reasoning about things at all, just providing a statistically plausible response to prompts.
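
For comparison, the task itself is trivial to express in code, which makes the struggle more telling. A few lines are enough just to check a candidate sentence:

    # verify that the first letters of a sentence's words spell the secret
    def encodes(secret: str, sentence: str) -> bool:
        initials = "".join(word[0].lower() for word in sentence.split())
        return initials == secret.lower().replace(" ", "")

    print(encodes("Hello", "Hey everyone, lick less onions"))  # True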
> I don't think it's actually reasoning about things at all, just providing a statistically plausible response to prompts.

It turns out that humanity’s problem might not be that AIs can think but rather that humans believe that AIs can think. One might even go so far as to say there’s a real danger that we hallucinate that AIs can think, to our detriment.

My experience is almost completely the opposite. My likelihood to dive into something new is significantly higher now.

It might help to approach it from top down? Usually, if I'm asking a technical question, I want to apply my deeply understood principles to a new set of implementation details, and it has amplified the heck out of my speed at doing that.

I'm kind of a difficult to please bastard, a relatively notorious meat grinder for interns and jr devs, and still I find myself turning to this non-deterministic frankenstein more and more.

I’ve been actively using it and it’s become my go-to in a lot of cases - Google is more for verification when I smell something off or if it doesn’t have up to date information. Here are some examples:

• reviewing contract changes, explaining hard to parse legalese

• advice on accounting/tax when billing international clients

• visa application

• boilerplate django code

• learnt all about smtp relays, and requirements for keeping a good reputation for your IPs

• travel itinerary

• domain specific questions (which were 50/50 correct at best…)

• general troubleshooting

I’m using it as a second brain. I can quickly double check some assumptions, get a clear overview of a given topic and then direction on where I need to delve deeper.

Anyone who still thinks that this is “just a statistical model” doesn’t get it. Sure, it’s not sentient or intelligent, but it sure as hell is making my life easier. I won’t be going back to the way I used to do things.

Edit: bullet formatting

100% this. It's also game changing for learning a new language (of any type, not just programming), any of the boring parts of software engineering like most programming tasks (it's like a personal intern -- sure you have to check their work and the quality is all over the place but still, dang I love it), and even a bit of therapy.

At worst/minimum, It's the ultimate rubber duck.

(To be clear, I'm exclusively using gpt-4)

I'm excited about the potential of travel itineraries once extensions are available. What if I could tell it where I want to go, and it could just handle picking the best flights and accommodations for me, so I didn't have to spend any time searching airline or hotel websites? I'm curious to know more detail about how you're using it for travel itineraries now.

The real time applications are a game changer. I haven’t dabbled with that yet! Pasting things from emails and summarising - then keeping in my notes app. Also for planning out days when on holiday.

For compliance, I have been using it to complete cheesy "security training" videos and quizzes that we are forced to watch in the organization for insurance purposes. The videos are so bad, the training is ineffective anyways. We used to load them all on mute at the same time every quarter and click through as fast as possible to get them out of sight, which is considered a metric for how valuable the videos are (how much we need to improve). ChatGPT usually gets it right! Hooked up to Playwright.

How does this work exactly? What is ChatGPT doing for you?

Generally rewriting emails for clarity... but I found another neat use of GPT-4.

For public APIs, I ask to make sure it's aware of the API. Then I ask for endpoints. I find the endpoint I want. Then I ask it to code a request to the endpoint in language X (Ruby, Python, Elixir). It then gives me a starting point to jump off from.

Thirty seconds of prompt writing saves me about 20 minutes of getting set up. Yes, I have to edit it, but generally it is pretty close.
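
The starting point it hands back is usually shaped like this. A sketch in Python rather than Ruby/Elixir, against a made-up endpoint, since the point is just having a skeleton to edit (and the auth scheme is exactly the kind of detail to double-check):

    import os
    import requests

    # hypothetical endpoint and parameters -- verify against the real API reference
    BASE_URL = "https://api.example.com/v1/widgets"
    API_KEY = os.environ["EXAMPLE_API_KEY"]

    response = requests.get(
        BASE_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"limit": 10},
        timeout=30,
    )
    response.raise_for_status()
    for widget in response.json().get("widgets", []):
        print(widget)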

You reminded me: I discovered that ChatGPT had invented an API for me. Has that happened to you yet?

Since it went to the trouble of writing code for the API as well, I contacted the API developers to follow up about the topic. The code given was kind of a hand-wave anyway so I'd need to polish it up.

The developers were surprised to hear they had an API. In truth, there was no such thing.

I then found myself in one of those awkward "welp, guess I can keep my job" conversations...good for them, but for me: Go home, no API here. A disappointment with some meta-commentary sprinkled on top.

That's just asking for their API to be implemented by some bot. Not sure they really get to keep their job.
> Generally rewriting emails for clarity...

This is the sort of thing that will force a lot of legal teams to shut down access to the GPT-4 API/GUI from internal networks.

People never think of unintended consequences.

Asking it a prompt is fine, but don't provide internal information as input.

This is one of the reasons there's a push to run your own large language models: if you run your own service you can control the environment, data and reproducibility.

This is exactly what my employer is doing, they pay so that our internal data (from employee queries) does not become part of the model. They've blocked the public ChatGPT etc.

Yeah, I have found I need to be careful. When I have used it, there is no confidential information in the email. I do pay attention to that.

That said, I think it will be interesting as Microsoft introduces this into Office 365. You bring up a great point. Most people will not realize they are sending potentially confidential information to Microsoft.

Perhaps it's no different than Grammarly... But I think you are right that legal departments are going to be all over this.

They already are. It is 99% stored on a Sharepoint on a Teams site anyways

What does this mean? I'm unfamiliar with Teams, the only person I know who uses it is my partner who works for the government (non-technically).

It means quite literally what it says - if you have office 365 you most likely have all your data in the MS cloud sharepoint. MS also has a separate government cloud.

A couple more years of chip improvements and it may run self contained within a device.

Get ready for ChatGPT: Enterprise edition! Now with SOC 2 compliance!

> Generally rewriting emails for clarity

I built a free ChatGPT chrome extension that integrates with Gmail for better UX: https://chatgptwriter.ai (300k users so far)


300k users is insane. Is it BYO key? Otherwise how do you handle that much load for free?

Looks like it's scraping the chatbot? You have to login to your chatGPT account?

Great extension! I used it recently, and had some trouble drafting email reminders (to respond to an email). Do you have any tips on how I could do that with the extension?

ChatGPT isn't compliant with any regulation, including GDPR. How much private data are your extension's users sending there?

I use it for similar things as GP, and find its strengths to be similar too.

ChatGPT hallucinates SVG path attributes. Ask it to make an svg of a unicorn - it will give you markup that looks okay, but if you look at the values of the paths, it's clearly gibberish.

(SVG is a particularly interesting case because it's XML on the outside, but several attributes are highly structured, esp g.transform and path.d. Path.d is basically the string of a Logo-like programming language. I was specifically looking at these attributes for realism, and didn't find it.)

Great question. If you ask it for an API endpoint that is described online but isn't well documented publicly, it seems to fall back to what it thinks you should do. For example, it hallucinated that you need a bearer token.

I don't know whether that is because that is a common way of doing things or whether a previous prompt responded with a bearer token... But it wasn't right.

For me, it's a leaping off point that often saves time if I ask the right question. To your point, you have to be quick to know enough about the API to deduce whether you and Chat GPT are in the same universe.

Can you provide an example of what prompts you would use?

Here is a good example:

1) Use Chat GPT in GPT-4 mode. I have found GPT-3 doesn't work in the same way.

2) I ask "What APIs does EasyPost have?"

It will respond with 7+ API endpoints

3) I ask "Can you write code in Ruby for the rates API?"

It responds almost perfectly with workable code from my experience in Ruby.

4) Then I ask "Can you give me that in Elixir?"

It responds with something I think is about 90% right. I am not as familiar with it but it seems close.

I am not trying to replace myself... I am just trying to make my job easier. And this seems to do it.

Note: I tried with GPT-3.5 and it doesn't respond with all the same APIs available. That said, if you want to try the above... It appears that the rates api isn't available in 3.5 but if you follow the example through.... it will still produce nearly identical code for the rates API even though it doesn't say that it is there.

please send your inputs. cute stories are whatever.

My favorite uses are:

- Interactive debugging. Yesterday, for example, it helped me debug some tricky CSS issue where it gave hints and after 6 times back and forth, the solution came up. I had to explicitly set `-webkit-appearance: none` for styling sliders in WebKit browsers; this wasn't the case for Firefox.

- Check out definitions. I have a small tool (https://github.com/rikhuijzer/ata) available on a keyboard shortcut and use it to quickly check out definitions for words when I come across a word that I don't know.

- Writing jargon and suggesting edits. I let it write parts of my paper. ChatGPT is way better than me in adhering to the jargon of the field and also gives useful suggestions for small things that I should add and makes sentences easier to read.

- Having an assistant. As cliche as it may sound at this point, I actually agree that ChatGPT feels like an assistant which thinks with you and is there to fallback on.

> But I'm also interested in hearing about useful prompts that you use to increase your productivity.

Just like Greg demoed in the GPT-4 developer livestream, I just ask the question in the first paragraph and then throw in as much information as possible after that.

I often use it as a thesaurus. "Words that mean X" or even "that situation X me and I was annoyed - give me options for X"

For programming, all sorts of things. I use it all the time for programming languages that I'm not fluent in, like AppleScript or bash/zsh/jq. One recent example: https://til.simonwillison.net/gpt3/chatgpt-applescript

I use it as a rapid prototyping tool. I recently got it to build me a textarea I could paste TSV values into to preview that data as a table; one prompt produced exactly the prototype I wanted: https://github.com/simonw/datasette-paste-table/issues/1

I use it for brainstorming. "Give me 40 ideas for Datasette plugins involving AI" - asking for 40 ideas means that even if the first ten are generic and obvious there will be some interesting ones further down the list.

I used it to generate an OpenAPI schema when I wrote my first ChatGPT plugin, see prompt in https://simonwillison.net/2023/Mar/24/datasette-chatgpt-plug...

It's fantastic for explaining code that I don't understand: just paste it in and it will break down what it's doing, then I can ask follow-up questions about specific syntax to get deeper explanations.

Similar to that, I use it for jargon all the time. I'll even paste in a tweet and say "what did this mean by X?" and it will tell me. It's great for decoding abstracts from academic papers.

It's good for discovering command line tools - it taught me about the macOS "sips" tool a few weeks ago: https://til.simonwillison.net/macos/sips

I have been using GPT-4 to generate i18n files, and it is great. You can see this post to check GPT-4's translation capabilities: https://www.reddit.com/r/visualnovels/comments/11rty62/gpt4_...

I can simply feed in an en.i18n.json file, and it will generate i18n.json files for as many languages as I want. I don't use a specific prompt, but I occasionally include general information about the software in it.
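
Mechanically it's just one chat-completion call per target language. A rough sketch using the 2023-era openai Python client (the 0.x ChatCompletion API); the prompt wording and file names are illustrative:

    import json
    import openai  # 0.x-era client

    openai.api_key = "sk-..."  # your API key

    with open("en.i18n.json", encoding="utf-8") as f:
        source = json.load(f)

    def translate(lang: str) -> dict:
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "You translate i18n JSON files. Keep keys and placeholders "
                            "unchanged; translate only the values."},
                {"role": "user",
                 "content": f"Translate the values into {lang} and return only JSON:\n"
                            + json.dumps(source, ensure_ascii=False)},
            ],
        )
        return json.loads(resp["choices"][0]["message"]["content"])

    for lang in ["de", "fr", "ja"]:
        with open(f"{lang}.i18n.json", "w", encoding="utf-8") as f:
            json.dump(translate(lang), f, ensure_ascii=False, indent=2)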

Edit: I do verify the output by translating it back to English using Google translate, but it seems I need to be more careful.

And then you let a human check it I hope? It does very well for the top 1% of languages (in terms of text online), but quality quickly degrades where there is less training material.

I asked a speaker of Northern Sámi, a language with not that big corpora available, to comment on GPT-4's translations into her language. She said "The translation is completely incomprehensible. Lots of non-existent and completely incomprehensible words, and the words that are understandable do not fit into the context. Besides, it's the wrong subject, it's Russia's report instead of the UN report etc." Only knowing a tiny bit of the language, I could've easily been fooled by the output.

Yeah, it manages to produce intelligible output in Hungarian, but I've given the output to some native Hungarian speakers, and they're constantly telling me that it's making up words or using strange archaic words that they've barely ever heard used in regular speech.

I find ChatGPT most helpful as a "what's that called" tool. A lot of my queries are finding/confirming the right idiom when writing something, or getting a specific name out of a vague description (JavaScript concepts, shell commands, CSS selectors).

Search engines with SEO are so reliant on keywords that it often feels like I'm suggesting answers rather than asking questions - it's so refreshing to be able to just ask again.

It's also helpful with explaining acronyms. Something like `What does SEO mean in "Search engines with SEO are so reliant on keywords"`.
It's replaced Google for the "what's that called" tool.

Google used to be good at that task, but it's sucked for the last four years or so. Whenever they gave up on search and leaned into sludge content plus ads.

ChatGPT is better than Google ever was anyway.

I hope it's going to remain that way, but the realistic cynic in me tells me that using ChatGPT to discover stuff is going to increasingly suck more now that people are going to try to target ChatGPT for SEO.

Since I use ChatGPT regularly, I decided to create my own client. I prefer to avoid third-party services that require privilege escalation like Grammarly and Copilot. I have developed distinct profiles for different tasks, each with its own system prompt and input method. After getting the hang of it, I plan to tweak the parameters as well. Here are some of the profiles:

Explainer: a default, general purpose Q&A. The prompt is "Explain to me like a 3rd grader. Skip prose." I plan to expand this profile to include additional communication styles, including step-by-step explanations, elaboration, and the Socratic method.

Proofreader: I use this profile to edit, simplify, and shorten any text (including this comment). I borrowed this feature from Grammarly Go, and it works by pasting the text and clicking a button.

Developer (in development): this uses a simple editor as input, with features similar to Cody/Copilot, such as adding types, naming things, summarizing, autocomplete, auditing, explaining, fixing, refactoring, and more.

Lastly, I plan to add two more profiles that are more creative and generative: Writer and Designer. They will act as private consultants/partners and assist me in brainstorming and complementing my skills in building websites.
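
The core of a client like that is little more than a mapping from profile name to system prompt. A minimal sketch of the idea (the profile wording beyond "Explain to me like a 3rd grader. Skip prose." is illustrative; assumes the 0.x openai client with an API key already configured):

    import openai  # assumes openai.api_key is already set

    PROFILES = {
        "explainer": "Explain to me like a 3rd grader. Skip prose.",
        "proofreader": "Proofread the text: edit, simplify and shorten it. Return only the revised text.",
        "developer": "You are a senior developer. Answer with code first, brief notes after.",
    }

    def ask(profile: str, text: str) -> str:
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": PROFILES[profile]},
                {"role": "user", "content": text},
            ],
        )
        return resp["choices"][0]["message"]["content"]

    # e.g. print(ask("proofreader", "some draft text"))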

Another idea: since I found the explanations on MDN Docs can be too technical sometimes (filled with jargon from the specs), I think beginners like me would find ChatGPT helpful.

Here are some screenshots comparing the two:

- MDN Docs: https://nitter.net/pic/orig/media%2FFsDOMrEaUAAy-Y5.jpg

- GPT-3.5: https://nitter.net/pic/orig/media%2FFsDPHbXagAA7Ky8.jpg

Thank you, I intend on stealing your phrase "skip prose"; I was trying to find a way to force it to exclude this whilst saving tokens.

I am using it to learn programming. I have no technical background but know enough about technology to be able to talk about the problems abstractly. Because my knowledge of the space is not via formal education and training, I have gaps in my knowledge and do not know deeper details about how ideas connect with each other on a deeper level.

GPT allows me to ask questions and provides the right kind of "connecting" bridges between two concepts that I was not previously aware of. It has made recursive forms of learning very easy for me, when I can articulate the "what" but lack a clear understanding of the "how".

My advice to learn programming is to avoid shortcuts and do the hard things like read the documentation
Bro documentation is notoriously terrible in almost all spaces.

GPT-3.5 is like super Google, and GPT-4 is like an intern who's a polymath in everything. Learning has never been easier for me, I'm stoked.

I don't think they were asking for advice.

The way dosco189 is using GPT is perfectly fine. They aren't letting GPT do all the work for them, they're letting it explain how concepts relate to each other, something you often will not find in the documentation.

Mostly just asking stuff directly on https://chat.openai.com/chat. Last 8 requests were (all successful):

- Asked it to improve a HN comment I wrote.

- Asked about an idiom I couldn't remember, by saying it in other words.

- Asked it to dumb down some things about options (finance) I didn't understand.

- Asked it if I could use the eBay API to list my purchase history (you can, and it knew how).

- Asked it to generate pretty standard Terms of Service for an app I'm working on.

- Asked it to generate a moderately complex Prisma (ORM) query that I described in natural language.

- Described what I wanted Nginx to do in natural language (e.g. "the index file will be served whenever no file is found") and asked it to output a configuration file.

- Asked it what the "XDG" in "XDG_CONFIG_HOME" stood for.

Also, occasionally ask it to generate shell commands using a CLI I wrote[0].

[0] https://github.com/clevercli/clevercli

Completely off-topic, but do you like Prisma, and how are you using it (scale, complexity, solo vs team, etc.)?

I toyed around with it a while back, and it looked potentially awesome, but different enough that I was worried about using it on a work project in case it failed in some use case.

I'm so sick of the SQL ORM situation in TypeScript, but Prisma might have an answer.

Prisma has this one glaring issue: https://github.com/prisma/prisma/issues/10649

Segmentation fault with NodeJS 18. It's about Prisma having linked a different version of OpenSSL than NodeJS. A workaround is to use the "binary" engine, meaning you run another process, talk via IPC to that, and that then talks to the DB, which is ridiculously roundabout.

But this issue is finally being worked on, so once that is fixed it's good.

It's good, but verbose: you can't select nested objects by just doing lesson.mapped.sections.course etc., you have to do something like
    lesson: {
        select: {
            id: true,
            slug: true,
            title: true,
            mapped_sections: {
                select: {
                    id: true,
                    slug: true,
                    section: {
                        select: {
                            id: true,
                            title: true,
                            course: {
                                select: {
                                    id: true,
                                    slug: true,
                                    title: true
                                }
                            }
                        }
                    }
                }
            }
        }
    }

It's a solo project, CRUD stuff, but having used pretty much all JavaScript/TypeScript ORMs in existence, I can say that Prisma is way ahead of the competition. I would personally recommend it. A traditionally cumbersome and annoying thing about trying a new ORM is that you basically have to re-learn a pseudolanguage for building SQL queries but that hasn't been an issue this time, thanks to ChatGPT. So overall, happy with Prisma and plan to keep using it.

> Asked it to generate pretty standard Terms of Service for an app I'm working on.

So you're using it to generate a legally actionable document. Is this a good idea?

Yes. I got a cheap AI lawyer if someone sues me, all good. All jokes aside, the alternative was to not have a "Terms of Service", so fairly sure it's better than nothing.

> Fairly sure it's better than nothing.

Until it isn't. Live and learn, as they say.

It's an arms race. First nobody was reading them, now nobody is writing them.
Has there ever been a case of a ToS so badly written that not having one would have been preferable? I'd be curious to hear about that story if it exists.
At a certain point in my life, I came to the conclusion that if something is important enough, it generally pays to: (1) either research the matter myself until I was satisfied that I understood the cost/risk tradeoff sufficiently; or (2) if I don't have the time or skills to do that, have the matter reviewed professionally. Both of which are alternatives to "doing nothing".

Employment or other contracts, health decisions, taxes ... that's how I roll.

That's just me, and I'm not you. It may also just be a hobby project or otherwise of negligible consequence, in which case it would seem to fall under the rubric of what generative AI is arguably suitable for ("better than nothing").

Hiring professionals is often about transferring liabilities to knowledgeable people. I wouldn't want to be answerable for something done by some automatic tool whose output I don't fully understand.

> Asked about an idiom I couldn't remember, by saying it in other words.

You inspired me to try it for finding a book I can't quite remember the title of. Unfortunately no joy. Still, for the amusement of anyone who is interested, or anyone who might be able to help me with this book:

Mindcrime: What was that horror novel from way back when, about the time traveling demon hound?

ChatGPT: It's possible that you're thinking of the horror novel "The Hound" by H.P. Lovecraft, which was first published in 1924.

In the story, two grave robbers become obsessed with obtaining a certain artifact from a tomb in Holland. After obtaining the artifact, a small bronze statuette of a hound, they begin to experience strange and terrifying visions. The hound appears to come to life and stalks them through their nightmares.

Later, one of the grave robbers travels to Egypt and discovers that the artifact was made by a sinister and ancient cult, and that the hound is actually a demon summoned from another dimension. The hound then follows the man back to his home in England, where it continues to torment him until he meets a gruesome fate.

It's worth noting that there have been many horror novels and stories featuring time-traveling demon hounds over the years, so if "The Hound" isn't the specific book you're thinking of, there may be another that matches your description.

Mindcrime: No, it was more recent than that. I think from sometime between the late 1980's to mid 1990's. I think the word "hound" might have been in the title.

ChatGPT: Thank you for the additional information! The book you may be thinking of is "The Hound of the Baskervilles" by Arthur Conan Doyle. Although it's not a horror novel, it's a detective story featuring the famous detective Sherlock Holmes and his companion Dr. John Watson investigating a supernatural curse and a demonic hound that is terrorizing the Baskerville family on their estate in Dartmoor.

"The Hound of the Baskervilles" was first published in 1902, but it has been adapted and retold in various forms over the years. There may be a horror novel that is a retelling or adaptation of the original story that you're thinking of, but without more specific details, it's hard to say for sure.

Mindcrime: No, it's much more contemporary than that.

ChatGPT: I apologize for the confusion. Let me try to narrow it down further. Do you remember any other details about the book? For example, do you remember the author's name or any other specific plot points or characters?

Mindcrime: There was some stuff about an airport or airplane near the end.

ChatGPT: Thank you for the additional detail! The book you might be thinking of could be "Watchers" by Dean Koontz.

"Watchers" is a horror novel that was published in 1987. The story follows Travis Cornell, a man who is hiking in the mountains when he stumbles upon a genetically engineered creature known as "The Outsider." The Outsider is a highly intelligent dog with extraordinary abilities that has escaped from a government laboratory.

Cornell takes the dog under his wing, but soon discovers that a ruthless government agent is after the dog and will stop at nothing to retrieve it. The novel reaches its climax at a local airport, where a final showdown takes place between the agent and the dog.

While "Watchers" does not feature a time-traveling demon hound, it does involve a genetically engineered dog that could possibly fit your description.

Mindcrime: No, it wasn't by Dean Koontz. And time travel was definitely a plot element.

ChatGPT: Thank you for letting me know. I apologize for not being able to find the book you are thinking of. Without more specific details, it's difficult to narrow down the possibilities. If you remember any other details about the book, please don't hesitate to let me know and I'll do my best to help you identify it.

Have you tried asking a story-identification question on https://scifi.stackexchange.com/? I've seen quite a few questions on there get answered with even the most obscure books or movies, and your question should fit right in on that site.

I use it a lot for Linux administration, troubleshooting, and scripting as well as some programming. I've only recently started using GPT4 and the API so I've only been using the chat interface so far. Examples of some stuff I've asked it just today:

- Asked which config files handled sleep mode when lid is closed and kept fixing it and asking for more possible locations until it fixed my issue (going into sleep during boot before user login if laptop is closed even with externals)

- Asked for a list of KDE config files I should track in git

- Copy pasted a list of ~/.config files and directories and asked ChatGpt for descriptions. Used those for commit messages to build up the initial dotfiles repo for KDE plasma and a whole bunch of other stuff that would have taken hours

- Asked it how to write a bunch of journalctl queries

- Queried it about some log lines I've been seeing in journalctl and had it guide me through troubleshooting using some terminal commands it came up with. Turned out to be a problem with nvidia-drm.modeset in kernel configs

- Asked it to guide me through a dozen awesomeWM customizations ranging from new code from text descriptions to edit suggestions to fix bugs in behavior I've described. Stuff like custom if/else trees handling setup specific scenarios (logic for clamshell open and closed with one or more externals connected by ID) are a breeze.

- Asked it for tips on how to use awesomeWM best and which keybindings to customize

- Code up the message passing from a firefox extension to a native Rust CLI (like the 1password extension) that uses remoc to pass through messages from all tabs to a single daemon over platform specific interprocess communication

AKA Google is fucked.

>Google is fucked

The last year or two have proved to me that they deserve it. Their search engine is utter dogshit now. Ultra commercialised, full of ads, so hard to find a result that is actually what I searched for. Guess that's what Google level greed will do.

It's amazing for system administration tasks like this. A few weeks ago I used it to install about 25 SSDs in the course of an hour. I was able to build up a one-liner that let me figure out which disks were recently added and had not been formatted or mounted anywhere. It helped me do this really, really fast.

For me the main benefit is what I would describe as 'rubber ducking discovery': using it as a learning tool, I am able to try out different problem domains that are completely outside my area, things like:

- Web scraping

- Web crawler

- Trying out Rust

- Trying out pytorch and training models for computer vision problems

This is all stuff that would have taken me several weekends of reading docs and getting familiar with libraries.

Now it suddenly becomes a few hours because the discovery of the right tool is semi-instant and although the code it provides is often wrong it generally puts you in the right direction.

My intuition is that it's a lot better at outputting viable Python and JS code than other languages, perhaps there is bias in how much data of other languages it has access to.

Rubber ducking discovery is a very good description of how I use it too. I don’t necessarily care if it’s 100% right, but it feels like it short circuits my “time to solution” by a significant percentage over hunting around through endless blog posts and SO answers. ChatGPT fills the gap between an idea and getting the LSP to tell me what will actually work.

Earlier today it totally hallucinated a built-in function, but when I started typing it out, the LSP kicked in and I could tell what the GPT was “thinking”. Before that I didn’t even know what the name of the function was that would do what I want, but it was close enough that it stopped me from having to parse the reference docs on my own.

I use it to help find answers more quickly than googling and scrolling through docs.

The problem is it lies so much. Makes stuff up. It is therefore only good as a hint machine, to give you solutions you can try with a sceptical eye.

I’ve been using it to generate bash scripts because I don’t know bash scripting, and also have it generate regexes for me to search for code references in a programming language where the “find all references” functionality doesn’t work quite well.

I’m the same way, I “know” bash but I’m not fluent in it, always have to look up how to do very basic things like looping. But for the simple things I need bash for ChatGPT does great as a time saver.

I didn't know bash scripting and then I wrote some scripts and then I knew bash scripting and use it for lots of stuff. Has ChatGPT resulted in learning by doing for you?

I guess I use ChatGPT similarly for Ruby and for Rust, but with the only difference being that in my mind I intend to “learn” those two.

Off topic: the similarity of yours and the parent's usernames was a funny little coincidence.

The past week I used GPT for about 80% of my commit messages. I put it in a terminal command so all I type is 'commit' and that's equivalent to: git add . && git commit -m "message" && git push.

The message is generated automatically via the GPT API.

I made it public in case anyone else wants to try/use/fork it:

https://github.com/stevecondylios/gpt-generated-commit-messa...

It's very convenient for README and docs changes; small changes whose commit message really doesn't matter, saving a bit of time and mental energy and allowing you to stay on task.
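
Not the linked repo's implementation, but the whole trick fits in a few lines: stage everything, ask for a one-line summary of the staged diff, commit and push. A rough sketch (assumes the 0.x openai client and an API key in the environment):

    import os
    import subprocess
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    subprocess.run(["git", "add", "."], check=True)
    diff = subprocess.run(["git", "diff", "--cached"], capture_output=True, text=True).stdout

    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Write a concise one-line git commit message for this diff."},
            {"role": "user", "content": diff[:8000]},  # crude truncation to stay within the context limit
        ],
    )
    message = resp["choices"][0]["message"]["content"].strip()

    subprocess.run(["git", "commit", "-m", message], check=True)
    subprocess.run(["git", "push"], check=True)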

You and I have very different ideas of what makes a good commit message

I’m interested to hear more. Can you give an example or two?

My take: Messages like this...

> Updated Gemfile.lock and added new dependencies (coderay, concurrent-… …ruby, crass, date, pry, method_source, public_suffix, puma, nio4r) and updated existing dependencies (rack-test, regexp_parser, xpath, nokogiri, racc, pg). Also added new files for a user authentication feature.

Describe the "what" but not the "why". Even "user auth wip" would be helpful. It's like having autogenerated code comments like:

// initialize variable i for later use in a loop

int i = 3;

GPT can struggle to see the forest for the trees. For example, if generating a dozen or so files with `rails g scaffold post`, a GPT-generated commit message may simply list all the individual items ("Created new post views, new post controller, new post model.. etc") when "Generated a posts scaffold" would have been a more general and useful message.

GPT sometimes 'sees' the bigger picture though, for example when I commit a new rails app, instead of listing the individual files, it instead generated: "Added all files for a new Rails application, including controllers, models, views, tests, and configuration files." It could have said "new rails app", but it wasn't too ineloquent.

For being productive? Not at all for the most part. I haven't really found anything that I do that I can punt to ChatGPT. I guess I could have used it to help me write this response, but what would have been the point?

The little bit of time I spend messing with it (and Bard now that I have access) is mostly just for fun; trying different jailbreaks and creating ridiculous scenarios and seeing what kind of reaction I can get from the bot.

To be fair, the one time I did try ChatGPT for something productive it was kinda helpful. I asked it to generate some Apache mod_rewrite rules for me for a particular scenario I was working on. What it generated wasn't exactly what I needed, but that could have been down to me not prompting it as well as I might have. Still, even with having to hand-tweak the output a bit it probably did save me some time, but not a massive amount.

All of that said, I'm sure the day is coming when I find some uses that fit my workflows, but I spend most of my time reading, researching, and experimenting with new stuff (but mostly using programming languages I already know well). So there just aren't a lot of obvious places to insert ChatGPT / Bard right now.

I'm pretty much the same. I don't find significant productivity gains from using it - maybe because I have a specific way of doing things already. For instance, I know it's better for me to understand React/whatever framework rather than letting chatgpt write the react state/reducers/etc and all that stuff.

I can definitely use it for emails and have used it to simplify exec emails in my company, but that's just it.

I recently asked it to convert some bash code to golang (which I'm just beginning to learn) and it was really useful to give the skeleton with proper syntax.

I noticed my productivity with GPT was closely tied to how quickly I could access it. For example, Copilot is so useful to me because it's directly integrated into the browser. So I decided to build a Spotlight Search-esque interface to GPT that I could access anywhere [1]. It's been useful in answering quick questions or drafting documents.

[1] https://github.com/JinayJain/gpt-anywhere

I use GPT4 to workshop different ML ideas. I ask it to combine different ideas from the literature, for example.

"How can I use denoising diffusion with this approach?" etc.

It's great to be in a critical mindset, because being creative and critical at the same time is much harder than being critical.

I use the socratic method and really dig in with it.

One-off extractions from semi-structured text like an email or paragraphs from a webpage. Sure, I could spend 40 seconds coming up with a regex that I run to reformat how I want it, or I can just say it in plain terms. And if I need something a little more involved it has my back too: "Extract the domains from these urls as one column and give a user friendly name for the website in the second column and give a short description of what the purpose of the site is in the third column"
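
For scale, the hand-rolled version of just the first column of that URL example already looks like this; the other two columns are where the model earns its keep:

    from urllib.parse import urlparse

    urls = [
        "https://news.ycombinator.com/item?id=35299071",
        "https://docs.python.org/3/library/urllib.parse.html",
    ]

    # column one of the requested table: the domain of each URL
    for url in urls:
        print(urlparse(url).netloc)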

Useful prompts for work:

1. Explain this code - reduces time spent learning a new code base

2. What do these changes do - reduces time reviewing PRs; I paste in the diffs

3. Making style changes - CSS is just meticulous for me

I also use it for foreign language learning. I'll write a paragraph in french and ask chatGPT to find the errors and list the grammar concepts that they relate to. ChatGPT has basically replaced my text books and explains concepts better than my professor.

A while back, I wrote a simple CLI-wrapper around OpenAI's API that I'm using daily (https://github.com/maxvfischer/askai). I use it as an addition to Stackoverflow to ask quick programming-related question straight in the terminal.

I have been using it to reword messages that are displayed in a UI. I write out the information that the message needs to convey and ask it to generate a simple and concise piece of text that does the job.

A month or so ago, I tried asking it some really specific questions about the Linux kernel and it did not generate anything useful. I assume it must have gotten a lot better now with the larger model.

I asked him to create new plausible scenarios for an X-Ray training program I'm writing.

It was not THAT plausible, but it filled the placeholders I have in my UI quite well, and he actually used other words related to anatomy than those I provided in an example.

Also, I asked him to output the resulting scenarios according to a vague JSON schema and he complied. Copy/pasted it into my JS frontend and it went without a hitch.

I found it interesting that you're starting to anthropomorphize it.

Not GP, but my native language doesn't have an "it" pronoun, just he/she, so my use of "it" is always inconsistent. May be the same situation.

I get many ideas that require coding, launching and a PoC, which I wasn't able to do as I don't have the time or workforce. I believe I can do it now: I will use GPT or Copilot X to start prototyping and launching ideas fast.

I'm having it generate code of course (it hallucinated the ability to write a minecraft mod for bedrock in C#). Other than that, it wrote a Discord bot for me.

I have also used it to make data tables comparing cars. Silly stuff like TCO$ per kW per kg. I found the best way to make a spreadsheet was to ask it to generate A, B, C column headings and numbered row headings. Then asking it to emit the excel/gsheet formulas instead of calculating the values. You have to double-check everything, but pasting the table into gsheets and just using the formulas bypasses any numerical issues. Excited to use this approach to tackle another problem (and get Wolfram integration set up, that looks amazing)

Summarizing and explaining science concepts to interested kids. It's good at simplifying language. I was trying to put a young kid on Wikipedia, but even the Simple English version uses too much jargon. This can break down the jargon, answer questions about specific details for clarification, and even calculate real world examples. It's a pretty neat science teaching tool.

Just used it to plan a classy bachelor party. Full itinerary, transit times, estimated costs, it provided suggestions on places to go based on my suggestions, really impressive.

I'd be interested to see your prompts for the spreadsheets.

It’s been really helpful for picking up a new programming language. Particularly around helping me understand conventions in a language I’m not familiar with.

GPT-4 has been great at breaking down complex regexes that I am too lazy to parse out in the moment.

I’m also finding it helpful as a creative partner on naming things. Something I feel like I spend a lot of time noodling on. Like: creative names for a data warehouse that are surfing related (BoardRoomData lol).

It's pretty good for brainstorming ideas and it can generate a mindmap in markdown format that you can visualise in tools such as Markmap. E.g. with a prompt like this (works best in v4):

"I'm brainstorming a business venture that is a cross between a boutique clothing store and DIY sewing classes. It’s called Style & Stitch. You can shop for clothes and learn to make your own. Please help me brainstorm some ideas for as a mind map (using Markmap markdown)."

You can ask it to combine ideas from different domains together for extra creativity (above example is output of one such attempt). Often it's not that creative on first attempt but if you prompt it with something like "how about some zanier ideas" it will do better.

I also like to prompt it with "output XYZ .. as a table with columns for X, Y, Z" or similar to get a nice markdown table of its output where it makes sense.

> For generating reports : I dumped a few SQL definitions in the prompt, then started asking questions like: "Write an SQL statement to check how many users signed up last week?".

Then used these commands in Postgres.

> For proofreading, adding emojis and changing tones : ChatGPT doesn't have a personality. Its prose generation is not bold. So I write the text myself, and ask Siri to proofread it, add emojis etc.

I call ChatGPT Siri. It doesn't seem to mind. It never acknowledges it though.

> For repetitive typing tasks ⌨: Convert Markdown text to org mode, add a quote around all variable names, add a tab in every closure.

> For writing code : This is a hit or a miss, but I have realized that Chat GPT points me to correct APIs, or variables that I can look up on Google

> For learning paths : I'm learning Emacs and cooking. I tell GPT my current state, things that I know, and the place where I want to be. It fills in the next steps. Not ideal, but I hope this improves. This will make AI an excellent teacher.

> For generating content ideas : It kills the writer's block. Chat GPT generates enough good ideas for me to pick and write tweets on. But I refrain from using GPT content directly because it's bland.

I find GPT very helpful for trying to understand the rationale behind decisions from a bit outside my field. These are cases that probably don't take any great insight for a practitioner, but which can be hard to arrive at for an outsider.

Being able to have a quick back and forth can keep me on track and productive instead of falling down a rabbit hole of research. An example might be this discussion with Bing from earlier today

https://docs.google.com/document/d/1jCZICl8wrsk8b5OTM9xpFbL5...

* Give me a bash one-liner to generate a secure random string

* I'm using a NextJS middleware function by exporting it from my page component like this ... Here is the middleware source ... But I get this error ...

* How can I tell if my site is running in production in NextJS?

* NextJS says localStorage is not defined. What should I do?

* Please adjust this Prisma schema so that there is an index on github_id

* How do I configure eslint to accept switch style that looks like ...

* Write hex bytes that represent a 16x16 bmp image favicon (didn't work lol)

* Please write me a React reducer and React context that holds state for a user's auth token and also synchronizes that token into `localStorage` with the key `__auth_token`.

* How do I disable the rule "@next/next/no-img-element" on the next line?

* Here's my current page ... What changes should I make so that the footer is at the bottom of the screen when there isn't enough content to push it down to the bottom of the page, but if there is a lot of content it will sit right below the end of the content and not at the bottom of the screen.

Generally it works really well!

This is very similar to my usage. I'm awful about wiping from my memory every implementation detail unless I work in something every day.

awk and sed are great examples, I find them critically important just infrequently enough I barely have any idea how to use them.

or the once in 2 years I need to do something complex in a spreadsheet, a natural language description of the problem is easier for me to remember than some sequence of vlookup

I had a script with a huge chunk of spaghetti awk and sed I wrote years ago and couldn’t remember how it worked. I pasted it into ChatGPT and asked it to explain it to me and then make some small edits that changed the output. It did a remarkably good job.
I hadn't even thought of having it explain my own garbage to me, that's great.

For my usage patterns of sed and awk, it's not complex, but it's usually that if it comes to me using them, a series of bad and urgent things has occurred and I don't have time to grow a beard and ponder unix zen

I've found it really useful for explaining mathematical concepts using the notation and terminology I'm comfortable with.

For example, GPT-4 gave me the first explanation of backwards-input and backwards-filter convolutions that I've been able to understand. This was because I was able to start it off by explaining how I understand forward convolutions, and it explained the bw convs in the same way.

Astonishingly good.

I use it for coding, with mixed results, for getting me going in the right direction.

I take everything it says with a grain of salt. Through some original queries about some people, I came to realize that a GPT is a stochastic parrot optimized for plausibility. Given a Venn diagram of plausible and reality, they have a large overlap. Our ideas of plausibility are informed by common repetitive observations of reality. So GPTs almost always sound truthful, and much of what they generate overlaps with truth; sometimes it does some interesting synthesizing.

For code hints in particular, which I hit it up for 2 or 3 times a day on a whim, I find that the domain matters. Asking for how to do things in Jetpack Compose, I get all kinds of weird answers. Compose is a highly volatile space; GPT will synthesize function signatures that are amalgamations of 3-year-old stuff and newer. It helps refine my internet searches. Thank heavens for the Kotlin/Compose Slack channels.

When I ask GPT for mainstream Python stuff, it does pretty well. Recently asking for help with parsing ical formats, it nailed a number of questions. As I moved into handling recurring events, it started getting weirder. It wrote plausible looking code with methods that didn’t exist but were hybrids of others. It missed letting me know there was a library specifically for recurring iCal events. When it came to dealing with isoweeks and timezones, it got all the right modules and functions, but put them together in weird ways. Like a blind person assembling a puzzle.

C code it does decently well. Elixir, it struggles with. Many of its elixir answers are very dated.

By and large, I treat it as a way of generating prompts/ideas for me to pursue. It’s like having an eidetic four year old at your disposal who’s read all of the code ever and can mention all kinds of things from association, but doesn’t really know. Or like interviewing someone who’s crammed on a subject, but hasn’t really experienced what they’re talking about.

For general troubleshooting, it's helpful to ask things like "How do I troubleshoot a slow OS X finder?" How do I migrate mail messages and calendar items from one Office 365 mailbox to a new Office 365 mailbox with different email address in the same tenant? Can I use group policy to configure certificate based authentication for Microsoft Outlook? Where do I start when trying to renew and replace an api certificate on aws?

Also, for leisure I've been using it for gaming. For example, "Do you know about the survival game Rust (yes)?" Then follow up questions, how many beancan grenades to break a metal door? How long will it take to craft 500 sulfur worth of 9mm ammo? I've learned quite a lot from it but when it said I can use a grappling hook to scale a high external stone wall, I noticed this flaw. There is no grappling hook in prod/vanilla Rust, so I told it so, and it corrected itself to say it may only be available on modded servers.

Also I ask it a lot of medical questions, treatments, symptoms, long term outlooks, over the counter treatments etc.

I try to use it as a tutor while studying. When I run into something that I don't understand then I start asking it questions, often times asking it to "explain like I'm 5". Overall it's been really helpful. Now I don't have to rely on search engine results (which is nearly an entire page full of ads at this point). I also don't have to spend time posting questions on the Stack Exchange sites, worrying about the nitty gritty details of how I phrased the question. With ChatGPT I can ask really specific questions right as they come up, and instantly get an answer.

I have noticed that it gives me wrong answers quite often. This can be a problem if what I'm asking is too far out of my depth. My strategy for dealing with the potential false information is to 1) Be suspicious of any answer it gives me. 2) Ask it, "Are you sure about that?" (lol) 3) Ask questions that tie into things that I do know, so that it's easier to detect potential wrong answers. I think that the process of being suspicious and critical of the answers also helps me learn, since I'm forced to basically try and prove why it is right or wrong after I get an answer.

So, overall I'm using it to enhance my learning rather than, "do work" for me.

One convenient usecase for me is to generate model classes from a sample JSON document: "please create me a java class using lombok from this JSON: {...}"

Also vice versa: "create a JSON document with sample data from this java model class: "public class Person {String name; ...}"

Last night, I used GPT-4 to help me design a stereo matrix mixer circuit.

First, I used it to help me make sense of the datasheet for a crosspoint matrix IC, and when "we" determined that the IC I was planning to use didn't support some of the functions that were critical to my design goals, it suggested a number of alternative ICs which might work, along with listing potential tradeoffs that might impact my design.

In the process of doing this, I had it make suggestions on how I could use various combinations of resistors and capacitors to buffer (clean up) noise that might impact my signal. At one point, it generated a schematic so that I could see what it was talking about, and it was correct.

At one point, it imagined some functionality on an IC that does not exist, and when I asked it "on a scale of 1 to 11, how confident are you that the AD75019 supports EVR?" (essentially, variable resistance across all 256 crosspoints) and it went back to the datasheet to correct itself, saying "on a scale of 1 to 11, I am 100% confident that it does not support EVR", which is about as sassy as you can get while still being obsequiously polite.

During the entire conversation, it not only suggested that I verify our conclusions with a qualified EE, but kept recommending that I check out existing commercial products. Not because it didn't understand I was building a device, but because it kept telling me that purchasing an existing product would be less painful than the time, expense and difficulty of building my own.

I believe that it was (strongly) implying that my time is valuable and that I should stop while I'm ahead. I ended up ordering an Erica Synths Matrix Mixer today, though I still might build my dream device. I call that productive.

I think this is interesting, because it points out the ways in which a future GPT might be subtly trained to embed advertising. "It looks like you're doing X; have you considered a commercial solution, such as Y or Z?".

While you did wind up with a device that probably will suit your need, you also wound up out of a potentially fun hobby project. Not everyone will call that a win :)

It actually worries me what's going to happen to tools like GPT when it starts being influenced by commercial interests and manipulating people. What's going to happen when a sports drink manufacturer pays GPT to never ever ever mention water when people are asking about dehydration, but instead to pitch the sports drink? Are we gonna block all kinds of knowledge just because it interferes with some corporation making a profit? What happens when GPT starts promoting a political candidate or demoting another? Who controls which candidates GPT prefers? What are we doing to protect GPT from this kind of outcome?
While I don't think that you're wrong about being concerned, I would suggest that we currently exist in a society where teens are held in rapture by TikTok, Google serves custom results full of ads, and Amazon is only barely trying to purge fake reviews. Kellyanne Conway can describe lies as "alternative facts" on television and be lauded for being "good" at her "job".

We're already in the storm.

I think I once got it to get out of "buy" mode by lying to it and telling it I'm in a sanctioned country. Maybe it's a trick that could work for you :)
1. Help with programming architecture decisions. I'm providing high level functionality overviews and asking it to design the right system given constraints. Sometimes it's off, but it only requires a few tweaks here and there to get it right - and I usually have the intuition / experience to make those tweaks quickly.

2. Write entire React component. This exercise is actually helping me be more modular in how I design components because if I'm typing out a page long description of the component, I know I should be breaking it down into smaller components.

I also give it name + description of existing component (if it's necessary to build this one), and it figures out exactly how to use it. E.g. assume the following components exist: [Editor ({ content: string, onChange: ()... }), ...]

3. Learning about anything new. My first instinct is to engage with GPT, and only after that to Google for more detailed, opinionated information. This is great for topics that are more objective. I find GPT to be horrible for subjective / less clear-cut questions like: "What is the best career move if my goals are __, and I'm in this situation: ___" - because it will regurgitate the average answer, not the best one; for questions like that, the mainstream answer is often more wrong than a contrarian but truer one.

Tangentially related but... There's an issue with training models on data whose license doesn't allow it. I don't know if it'll hold up in court, but here's my prediction: we'll see an open source license that welcomes use of the repository to train future models IFF those models are then made public. Private models will be given the finger by the license and won't be allowed to use the repository as training data.

Funnily enough ChatGPT 4 can probably be used to help enhance commonly used open source licenses to add that clause to the license.

I'm not saying I totally root for that (I kinda do): I'm saying we'll see such a license at some point.

I think all of our stories will look the same: we all ask about small problems to get an answer as accurate as possible. It did help me with something else, though. If you have a weight problem, you can ask ChatGPT to plan an eating schedule for you. I've lost 6kg in 2 months now by eating healthy food. If anyone is interested in what questions I asked: I asked ChatGPT to create a weekly meal schedule where, given my current weight, activity, and height, I would be able to lose 0.5kg per week.
That's pretty interesting. I just started using it for tracking my calories. I'm usually overwhelmed by looking up all the calories in all the components of my meal, but if I can just say "I cooked a chicken breast in olive oil, boiled a cup of rice, and put four white button mushrooms in it. How many calories is that?", it saves me a lot of time and effort.
Is it _right_ though? It's not great at arithmetic, nor is it necessarily reliable about the number of calories in any particular ingredient; while the Wolfram plugin promises to make that a bit better, it still feels like it might be a crapshoot.
I use it to write boring German letters that don't need a human touch. It saves me the trouble of finding the right words in a foreign language. It made dealing with the bureaucracy a lot less dreadful.

I think it might be good at answering "why" questions since Google completely gave up on that.

GPT is helping me write a D&D adventure. I'm not very creative so it helps me. If I don't like elements of the story it's very responsive to changing elements.

It's surprisingly good at remembering earlier details. Occasionally it will forget things, but another prompt tends to fix that.

For example I'm about to plan Chapter 4. The broad story so far is that the adventurers are doing some tasks for a friendly NPC and during that will slowly uncover clues to the BBEG, in this case a cabal of vampires who plan to plunge the world into darkness.

Prompting it with:

    Let's plan Chapter 4. Can you produce a short table of suggested titles and themes?
It produced:
    Title                 | Theme
    The Vampire's Lair    | The adventurers infiltrate the vampire lord's castle and uncover his dark secrets.
    A Deadly Dance        | The adventurers must navigate a treacherous ballroom filled with traps and minions.
    Into the Abyss        | The adventurers descend into the catacombs beneath the castle and face unspeakable horrors.
    The Final Showdown    | The adventurers confront the vampire lord and his minions in a dramatic battle for the fate of the realm.
    Beneath the Moonlight | The adventurers discover a hidden shrine dedicated to the vampire lord and must unravel its mysteries.
    Blood Magic           | The adventurers face off against powerful necromancers and witches who serve the vampire lord.
    The Betrayal          | The adventurers must confront a trusted ally who has been working with the vampire lord all along.
In table form.

Now it's got a little confused, because in chapter 3 the gang already infiltrated the castle and confronted the vampire lord, but another prompt will fix that.

Prompting:

    In Chapter 3, the adventurers already infiltrated the castle and dealt with the vampire lord. Can you reproduce that table assuming the vampire lord is no longer a threat for now (even if he remains at large)?
Produces:
    Sure, here are some possible titles and themes for Chapter 4, assuming that the vampire lord has been defeated (at least for now):

    Title                   | Theme
    A Web of Deceit         | The adventurers uncover a plot by the cabal to take control of a powerful artifact or location.
    The Hunter Hunted       | The adventurers are pursued by an assassin or bounty hunter hired by the cabal.
    The Fallen Hero         | The adventurers discover that a beloved hero or figurehead is secretly working with the cabal.
    An Ancient Evil         | The adventurers stumble upon an ancient artifact or tomb that holds the key to defeating the cabal.
    The Betrayer's Gambit   | The adventurers are double-crossed by a trusted ally who has been secretly working with the cabal.
    A Race Against Time     | The adventurers must complete a series of challenges or tasks before the cabal can enact their plans.
    The Final Confrontation | The adventurers engage in a final showdown with the cabal's leader or their most powerful agent.
(again in table form)

Which are great suggestions, now I can pick one and ask it to write the next chapter based on that, at which point we can flesh out more details.

Various things:

- Non-technical explanations. Useful for the pointy-haired boss. And his boss as well.

- Stack Overflow (but on steroids).

- Summarizing long-form articles my friends send me.

- Generating rudimentary programs/scripts I'm too lazy to write on my own.

- Tutorial-style resources for unfamiliar technology (like writing CRDs in k8s).

- Generated a working Makefile.

- Sometimes, I'll take existing small programs in Go and have them rewritten in another language. It's just fucking cool to watch.

- Rudimentary translations from English to French.

Can't use CoPilot for work yet since, well, they can (and will) upload proprietary IP. But for everything else in life, the productivity gain has been enormous.

I recently had a relative staying over who was asking for help with some arduino code. He needed his chicken incubator to read from a temp sensor and activate a heat source when it was out of range. I thought it was a good opportunity to show him ChatGPT.

Description of the problem, some specifics on equipment, and it bangs out a solid first pass at the problem that might require some tweaks. Then it occurred to me that it had not only written the code, it had set the temperature threshold to a sensible default without my having to separately look up that information.
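
For illustration only: the actual code was Arduino C++, and the sensor, thresholds, and timing below are invented stand-ins. But the control logic GPT tends to produce for this kind of request is a simple hysteresis loop, sketched here in Python with a simulated sensor so it runs on its own:

    import random
    import time

    # Hypothetical values; GPT suggested sensible defaults, but real
    # incubation thresholds should be verified against a hatching guide.
    TARGET_TEMP_C = 37.5
    HYSTERESIS_C = 0.5

    def read_temperature_c(current: float, heater_on: bool) -> float:
        """Toy stand-in for a real sensor read: drifts up when heating, down otherwise."""
        drift = 0.2 if heater_on else -0.15
        return current + drift + random.uniform(-0.05, 0.05)

    def control_loop(steps: int = 20) -> None:
        temp = 35.0  # starting temperature of the simulated incubator
        heater_on = False
        for _ in range(steps):
            temp = read_temperature_c(temp, heater_on)
            # Hysteresis: switch on below the low threshold, off above the high
            # one, so the relay doesn't chatter around the target temperature.
            if temp < TARGET_TEMP_C - HYSTERESIS_C:
                heater_on = True
            elif temp > TARGET_TEMP_C + HYSTERESIS_C:
                heater_on = False
            print(f"temp={temp:.2f}C heater={'on' if heater_on else 'off'}")
            time.sleep(0.1)

    if __name__ == "__main__":
        control_loop()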

please send your input, stories are whatever
Last week I used DALL-E to make a logo for a side project and GPT to write some JavaScript for it (I'm mostly BE these days).
To me it is most useful when I have a singular issue with a bounded piece of code. Like a method, a single algorithm. A nasty SQL query too sometimes.

I don't use it often (once a day maybe), but when I do use it on problems I know it is good at, I get really good answers. One-liners for things my code was taking 20 LOC to do, or some simplification I overlooked. But I have to be careful about what I give it.

I know I am better with architectural issues, but I suck at puzzle-like algorithmic problems. Don’t like them either, feels like a machine should do that and now it does.

Lots of tasks that I understand, but where I'm not familiar enough with the details to be productive. Anything that's easier for me to read than to write. Anything where I'd otherwise be stumbling around trying to build a template for myself.

Here's an example capturing a session from earlier this week:

https://twitter.com/JoshCMandel/status/1638723030154203138

I've mostly been using it to write shell scripts, or to answer specific/hard to Google questions about various libraries/tools.

Sometimes I use it to help me come up with names for projects/classes, sometimes I use it for debugging help (X doesn't work, why not?)

I think I could get a lot more out of it if I was more creative. It's an incredibly valuable tool already (on a similar level as intellij for me)

It's super useful for working with terrible APIs such as Shopify. Since it's trained on programs where people have figured it out from trial and error, it saves me a lot of time.
I’m obligated to pick up a new bioinformatics DSL and have been asking GPT4 to translate my current code (bash, go, python) into this language. It is not perfect but it gets me close to what I need, with some editing.

Sometimes I ask it to make music: https://github.com/carbocation/jinglebells

It's helping with fast explanations for pretty much anything.

But the drawback is... it enables laziness for the sake of "productivity". Developer quality might drop significantly from spending less time doing proper research on the subject, and on top of that its output is not necessarily correct, yet many will treat it as reliable, again out of laziness or ignorance.

I use it to optimize my Python + Pandas code. Dump some code in and say “Can you rewrite this code to be faster”. It even gives explanations as to why it’s making those changes.

Another one I use it for is saying “Rewrite this code to run on multiple cores”. Really saves me a lot of Googling time as these are things I want, but I don’t find much pleasure in actually writing code.
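
To make that concrete, here is a hedged sketch (my own example data and function names, not the actual code I paste in) of the two kinds of rewrites it typically suggests: vectorizing a row-by-row apply, and fanning non-vectorizable work out over a process pool:

    import multiprocessing as mp

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "price": np.random.rand(1_000_000),
        "qty": np.random.randint(1, 10, 1_000_000),
    })

    # Slow, row-by-row version (the kind of code I'd paste in):
    # df["total"] = df.apply(lambda row: row["price"] * row["qty"], axis=1)

    # Typical suggested rewrite: vectorize the arithmetic instead of apply().
    df["total"] = df["price"] * df["qty"]

    # For work that genuinely can't be vectorized, it tends to suggest
    # splitting the frame and mapping over a process pool.
    def summarize(chunk: pd.DataFrame) -> float:
        return float(chunk["total"].sum())

    if __name__ == "__main__":
        chunks = np.array_split(df, mp.cpu_count())
        with mp.Pool() as pool:
            totals = pool.map(summarize, chunks)
        print(sum(totals))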

I've also used it to generate some proof ideas while I'm going through exercises in Baby Rudin, or to check whether a proof I've come up with makes sense.

I'm a writer (copy, social, all-purpose) at a startup. I've been using GPT4 via Lex.page to generate crappy first drafts which I then improve and expand. It can speed up the process quite a bit, but needs handholding and someone to add in personality.
What's lex.page? The landing page doesn't give any information, just requires me to login using Google.
I've been using it as an aid in my writing process. Essentially, I can now pour my thoughts onto paper, complete with errors and poor word choices. With just a click, GPT transforms it into a presentable version.

Basically I created a small app to streamline the workflow. https://henshu.ai

This is cool. What kind of prompts are you running in the background to improve the prose? How can I direct the prose in a certain way - say if I wanted my text to be official or funny?

We are building a GPT integration as well that helps users analyze reviews. Check out our demo https://show.nnext.ai/

You can ask it to rewrite the given prose and make it funnier. If the given text is already kinda funny it should spice it up. If it doesn't have much material, GPT will likely hallucinate some material to make it funny.
I've been using it as a really specific codegen tool. I give it a chunk of a Swagger doc and ask for a TypeScript React hook that validates the API response using zod, and it gets it right pretty much every time. It's nothing I couldn't do myself, but it does it faster.
Is there a big difference between ChatGPT and ChatGPT Plus? I use ChatGPT for routine things every day (some basic wordsmithing, looking up how to use libraries, etc.) and it is already quite good. What does the $20/mo get me that I don't already get with the free version?
I think the regular one only has the standard model. And when I used the regular version, it was often busy so I could not log in.
GPT-4, and plugins that allow it to pull fresh data from live sources.
I am using it to write documentation for the product that I'm building. It's an app builder and requires extensive documentation. I spit out thoughts to ChatGPT and ask it to organize them into a structure with complete sentences. It would take me 3x the time to structure it myself. But I wonder if this means I'm getting lazy?
I tried out the wxWidgets library recently and used GPT to generate examples on how to do things. Thought it worked excellently and got 99% of the code right.

In the past, I'd have used forums to find examples. There were still some forum posts about wxWidgets on Google, but I got the impression most were being hidden. Either way, GPT had them on demand.

Bash scripting. I've been doing it intermittently for over 20 years but my tool set knowledge is limited. ChatGPT often shows me new ways of doing things in Bash that I'd never have thought of. It's downright brilliant.
It's been a real mixed bag for me. The other day I tried to get it (GPT-4) to generate shader code to create a ray with a pannini projection, and it failed over and over and over again. No amount of reprompting or nudging got it anywhere close to fixing the problem.

On the other hand, it can explain certain complicated concepts extremely easily. I like asking it questions when I just want a general answer as opposed to something that must work.

I've tried. Recently I've been using it to try to improve my resume, asking it for suggestions and for information on certain technical subjects related to my career and current situation.

At the end of the day, none of the output it's offered has necessarily impressed me, though some of the ideas from that output have influenced how I write my resume.

I also tried to see if it could generate some code similar to something I used in a recent side project. It utterly failed to produce correct code, but perhaps the GPT-4 model would do better. So far I've mostly been using it as a search engine, and I will admit it's done better than Google and friends at giving me the information I ask for.

It easily bridges over sections of code and documentation that are otherwise tedious to produce. I can just write a dense rambling explanation, and it'll decompress that into a piece of text or code that includes all the well-known bits that I skipped in my description.

Also fun is just to write out a hypothetical library idea, and walk through a whole hypothetical development process to validate it, steer it in alternative directions, and find the more challenging points for improvement.

I have a lot of experience in Operations / debugging of web apps, and the plumbing around them, and only a little when it comes to writing software. I'm finding ChatGPT to be very useful as someone who basically knows what needs to be done to make something but doesn't know the design patterns well.
A while back, I mentioned in a thread that I have found ChatGPT to be quite useful for correcting grammar and spelling errors (https://news.ycombinator.com/item?id=34781888).

Later, when they released their API, I developed a CLI tool for this purpose (https://github.com/pncnmnp/grammatical). Note that it is not flawless, but it works well. It has improved my writing productivity, both for blogging and emails.
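
The core of a tool like that is small. Here is a minimal sketch of the idea, assuming the openai Python package (the 0.27-era ChatCompletion API) and gpt-3.5-turbo; the prompt wording is my own, not necessarily what the linked tool uses:

    import os
    import sys

    import openai  # pip install openai (0.27-era API assumed)

    openai.api_key = os.environ["OPENAI_API_KEY"]

    def correct(text: str) -> str:
        """Ask the chat model to fix grammar and spelling without changing meaning."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "Correct the grammar and spelling of the user's text. "
                            "Preserve the meaning and tone; return only the corrected text."},
                {"role": "user", "content": text},
            ],
        )
        return response["choices"][0]["message"]["content"].strip()

    if __name__ == "__main__":
        # Usage: echo "there going too the store" | python correct.py
        print(correct(sys.stdin.read()))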

I like your prompts for your script, I may integrate it into my workflow too. I have found that when I am editing my work, it is helpful to have the original sentence and the suggested sentence one after another. While this takes away the paragraph form, it clearly helps you tease out the "improvements" to either accept or reject. I also have GPT number the sentences. And then when I am done, I say - something like - "please assemble back into paragraph form, all revised sentences are accepted except numbers 5, 12." And then it reassembles the paragraph(s) with the revised corrections. I use ChatGPT (4) for really long stuff though... might not work with Turbo3.5.
That is an excellent idea! Perhaps someone should consider developing a smart diff tool for this purpose.

To be honest, my work style can be a bit lazy at times, and this may reflect in my approach. My tool simply copies any changes directly to the clipboard. I then replace them with the original text and make adjustments on the same page. However, for more complex workflows, your approach is awesome. It could potentially transform the process into something similar to Grammarly.

Edit: Have you noticed any differences in terms of quality between GPT-4 and Turbo3.5 for this particular task?

Oh yes, GPT-4 is much more compliant when asked to revise. GPT-3 sticks to its guns when it thinks it is correct.
Ah, that's interesting! It could be related to the improvements they seem to have made in the area of "overreliance". According to OpenAI's paper (https://arxiv.org/pdf/2303.08774.pdf):

> Overreliance occurs when users excessively trust and depend on the model, potentially leading to unnoticed mistakes and inadequate oversight.

> At the model-level we’ve also made changes to address the risks of both overreliance and underreliance. We’ve found that GPT-4 exhibits enhanced steerability which allows it to better infer users’ intentions without extensive prompt tuning.

> To tackle overreliance, we’ve refined the model’s refusal behavior, making it more stringent in rejecting requests that go against our content policy, while being more open to requests it can safely fulfill. One objective here is to discourage users from disregarding the model’s refusals.

> However, it’s worth noting that GPT-4 still displays a tendency to hedge in its responses.

This is certainly an interesting tool. Also, it seems that they offer a great API - https://languagetool.org/http-api/.

However, it appears that the self-hosting option only gives access to the basic version (https://news.ycombinator.com/item?id=32238705), which is still impressive, but the premium version supposedly has better grammar and style features.

Honestly, ChatGPT's $0.002 per 1k tokens is quite tempting for me. Even after hundreds of queries, my monthly usage is less than 50 cents.

Have it write Windows .bat files to automate my computer use. Some would have taken hours to write, but ChatGPT delivers them in a few seconds. I ask and it delivers. It's not always right, but it does not hurt to ask.

I'm now looking into other areas where it can help me automate easy but tedious tasks.

Some people still doubt its usefulness. I don't.

I was trying to figure out what an uncommented piece of code left by an old vendor actually did and so I asked ChatGPT. It instantly took me line by line through the code and saved me probably 10 to 15 minutes.
Email rewrites, translation between English and French, asking about some very old functions and structs from the 1991 version of ls.c, teaching it some RPG rules and then playing text games, generating boilerplate code for work and side projects.

So far I'm only using the free version and the experience is good. I might buy the Plus subscription, but so far I'm lacking any hard requirement. I particularly hope it gets better at teaching me systems programming, as it sometimes BSes.

It's actually too slow at the moment. I find myself asking it something and then opening another tab and doing a conventional search. Often, I'll get the answer faster via conventional search.

At this point I'd prioritize speed over new capability.

I've asked ChatGPT for solutions to coding problems I haven't encountered before, as well as error messages that aren't immediately obvious. This augments Google for me.

I'm also working on a UX for AI to make people more productive when creating things: https://inventai.xyz. Subscriptions are nearly done, which are needed to actually generate content.

I use it to write tons of boilerplate code in the random flavour-of-the-week languages I am dealing with at work or on hobby projects (e.g. nginx/ansible/powershell/golang), weird codebases you're thrown back into a few years later. I know what I want, I just forget the exact syntax of how to get it done. Sometimes it's wrong and I have to correct it, but half of the battle is knowing when it's bullshitting.
With the understanding that it's only trained up to Sep 2021, I'm using it to spot check for any libraries that have flown under my radar, or any other methods of doing work that I usually do, but in a different fashion.

Do I always get stuff that I can apply? No, not really. But given that discoverability can be low for things like that, it's usually helpful at finding me things to, at the very least, look into.

As an FYI, it's hallucinated libraries for me when writing Python code, importing things that don't exist.
On the other hand, it can write hypothetical example code for a Python binding or conversion of a library in another language.
My big question is, are people at OpenAI actively using GPT as part of their work? Are they getting a productivity boost because of it?
Yes, apparently.

> We’ve also been using GPT-4 internally, with great impact on functions like support, sales, content moderation, and programming. We also are using it to assist humans in evaluating AI outputs, starting the second phase in our alignment strategy.

Source: https://openai.com/research/gpt-4

I use it to give me the perfect functions for any task I can dream up in Google Sheets. Some complex functions run as long as a novel I never would have written myself, but it works very well, very quickly, and when it errors it can nearly always debug itself. It understands the Maps for Sheets extension well, and I'm trying to teach it to master GPT for Sheets.
I am recording all my calls and building a Q&A bot over the transcripts.
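
One common way to wire that up (a sketch under my own assumptions about the stack: the 0.27-era openai package, ada-002 embeddings, and in-memory search; it may not match how this commenter built theirs) is to embed transcript chunks, retrieve the closest ones for a question, and pass them to the chat model:

    import numpy as np
    import openai  # assumes OPENAI_API_KEY is set in the environment

    def embed(texts):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        return np.array([d["embedding"] for d in resp["data"]])

    def answer(question, transcript_chunks, top_k=3):
        chunk_vecs = embed(transcript_chunks)
        q_vec = embed([question])[0]
        # Cosine similarity to find the transcript chunks most relevant to the question.
        sims = chunk_vecs @ q_vec / (
            np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec)
        )
        context = "\n\n".join(transcript_chunks[i] for i in np.argsort(sims)[-top_k:])
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "Answer using only the call transcript excerpts provided."},
                {"role": "user",
                 "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return resp["choices"][0]["message"]["content"]
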
It's a very good doc search. If you are working with new systems, it's insanely efficient for learning.
I have asked ChatGPT tax questions, some of which I knew the answers to and others not. I think it's a great summarizer of the spammy blogs out there.

It’s great for bash one-liners with flags and OS-specific nuances.

Haven’t really been able to use it for really advanced things. But maybe someday.

Be careful with tax questions.

I asked some for a moderately complicated tax situation and ChatGPT very authoritatively imagined a deduction that didn’t exist by combining two different but related parts of the tax code.

I ask it for functions in Google Sheets and it gives me the perfect thing I never would have come up with myself, sometimes functions a mile long. I also explained the Maps for Sheets and GPT for Sheets extensions, and it is a master of both.
- Having it write Ansible tasks for me to automate setting up my infra.

- Generating test data for integration tests.

- Creating sample data for demos and marketing material.
I used it to understand the US healthcare industry as a non-US person.

I often use it to just gain ideas when I'm stuck with any situation, eg: "How to deal with non-responsive stakeholders?"

I was writing ffi bindings for a lib in Rust and asked it to write out all the structs for the external lib with any needed serde attributes and it worked very well.
Code generation such as converting json to F# types.
I use it to do bizarre linguistic stuff with my writing (prompts such as: "convert this sample of modern day English to french-Canadian gibberish as heard by an 18th century poet at a pub"). I use it to mess around with my parents. I use it to help with difficult math problems, sometimes. For me? It's been a terrific assistant.
Lately I’ve been doing a lot of very deep discussions with GPT-4 about design issues I’ve had, and brainstormed ideas on how to improve them. In these discussions, GPT-4 has been able to really dig into the details and grok the problem space, and has surprised me with the quality and brilliance of its ideas and suggestions. Some of these I have tried, things that I would not have thought of, and it turns out the suggestions really helped solve the challenges I was facing. To me it’s like working with a brilliant person who can only communicate via text, can’t look anything up on the internet, and can’t remember more than a few pages of text at a time, but aside from those limitations is a genius.

Lots of people talk about generating code with Chat GPT, but to me its real value is in having deep detailed discussions about design problems.

It's been so successful at this that recently I gave GPT-4 the full interview design skill assessment that I give to engineers when I interview them at Microsoft. GPT-3 wouldn't be able to handle this, but what GPT-4 did here astonished me. My assessment is that this is a principal-level performance. It didn't have to do other things that normal candidates have to do, but for this raw assessment of design skills it was spot on. It would have impressed me even more if it had figured its final solution out from the beginning, but that's what it gets for blurting things out before thinking about them, which people do, too.

The important thing I’m communicating here is not that I am impressed because it’s amazing that a computer can do this stuff; I’m impressed at what it has done here compared to almost every human I’ve ever walked through this question with. The approach I used here is the same I use when asking candidates this question, because in addition to testing their ability to code, I’m looking for how well I can understand the candidate’s ideas, and how well they can understand and then apply my own ideas when I ask them to take a different approach on certain things than they were thinking of themselves. This is one area where many great coders struggle; they can code like the wind when it’s their own idea but struggle to work collaboratively. This kind of mental flexibility, ability to think of things in a different sequence or consider other ways to solve the problem after thinking of their own solution is also a required skill often in real-life meetings and other collaborative settings. I’d rate this candidate as an outstanding, top-notch collaborator.

My point here is that using it only to write boilerplate code is a waste of its best value. My suggestion is to get GPT-4 with its larger token limit, and talk strategy with it. Tell it all about your biggest challenges at a level of detail that would exhaust a normal human, and talk through ideas of how to improve your world.

Talk to it about your people problems too. It’s an astonishingly wise counselor who has a wealth of positive insights and suggestions. It’s also great for elegantly wordsmithing things.

Don’t miss out on the chance to collaborate with this endlessly creative and endlessly patient collaborator.

I've been a paying ChatGPT subscriber for about three or four weeks now. One day I had a few thorny work issues, asked the free version, and it was so good I got a subscription.

Since then I've asked some about general knowledge, history, religion, geography, politics, other topics of interest. Mostly in English, but some in Portuguese and a little in Spanish. It's extremely good in all three languages.

Mostly though I've been asking about random work topics that come up every day. We use lots of systems and tools at work, and I need to write software to handle diverse areas. ChatGPT cuts right to what I need as far as: (a) general knowledge of tools and what their purpose is; (b) surveys of categories of tools, comparisons between competing offerings; (c) specifics on how to use, configure, program against various tools, query data, change things; (d) questions on best practices and pitfalls. This is mostly in the context of macOS, Linux, AWS, Kubernetes, observability tools, and APIs for lots of DevOps-related systems. I do lots of coding in Python, and I also do a lot of ad-hoc diagnosis of situations. (We have a great DevOps team that manages infrastructure with standard DevOps tools -- my job is to build what those tools don't address so well, and also to help build out future data-engineering efforts.)

I'd say my use of Google search to find relevant articles / pages has gone down 70%. One small example from today: I wanted to use `jq` to process some `docker ... --format json` output to pull out some data. I didn't want to learn the ins and outs of `jq`, so I described my problem and it gave me a good template I could adapt.

Any time now in scrum or other meetings, if there's any question about something, we often just consult ChatGPT during our Zoom/screen-share sessions. I think generally I have a better sense on how to structure questions and question progressions to get quick answers than some others.

I've also found ChatGPT makes up stuff sometimes ... but it's usually close enough.

One comfort I have is that, at least for now, ChatGPT can't direct the overall organization of code for the many situations I need to address, so I'll have a job for a while. It does though fill in the knowledge gap at the edges, I don't waste near as much time searching for and reading documentation and examples. ChatGPT usually has good ready-made low-level examples when I need them, and high-level descriptions of tradeoffs and best practices.

I'm "committed" to Jetbrains tools, been using them for a long time. Today I began wondering what I might be missing from CoPilot, downloaded the CoPilot plugin for PyCharm (would also work for IDEA, DataGrip, etc.). I couldn't get the CoPilot plugin to log into Github and saw that others have had similar problems in the past ... so I can't use CoPilot yet. Maybe in a week or two I'll have a basis for comparison. (I don't want to switch to VSCode.)

EDIT: word choice, minor clarifications

Every morning I ask it whether it has a headache. Then I smack it with a monkey wrench.
Me: what do you think about Robert Frost

GPT: I speak the road plausibly travelled.

For business, I ask it to generate plans.
Interesting. I asked it for a business plan and it gave me a bunch of useless cruft. I guess that's the plausible average of public business plans anyway.
- Creating JSON out of unstructured Text.

- Writing SOPs

- write Email

- Tone Analysis

- Recommendation Engine

* writing powershell scripts

* How to write js code using d3 to animate swapping of two bars in a bar chart? (No other context given, it gave me a v good, working answer)

* what is a convertible note?

* In this it failed - can u help me setup auth to my react app using nextjs, for deployment in vercel?

* next I'm planning to ask several questions to understand state of the art in child education, there are many methods like montessori, Waldorf etc and I want it to provide a summary

* I need to keep adding this flag to make npm build to succeed, plz tell why I need it and how to solve the real problem (it explained beautifully and now I'm using yarn as a result)

