
Ask HN: Should HN ban ChatGPT/generated responses?

source link: https://news.ycombinator.com/item?id=33945628


They're already banned—HN has never allowed bots or generated responses. If we have to, we'll add that explicitly to https://news.ycombinator.com/newsguidelines.html, but I'd say it already follows from the rules that are in there.

Edit: It's a bit hard to point to past explanations about this since the word "bots" appears in many contexts, but I did find a few:

https://news.ycombinator.com/item?id=33911426 (Dec 2022)

https://news.ycombinator.com/item?id=32571890 (Aug 2022)

https://news.ycombinator.com/item?id=27558392 (June 2021)

https://news.ycombinator.com/item?id=26693590 (April 2021)

https://news.ycombinator.com/item?id=22744611 (April 2020)

https://news.ycombinator.com/item?id=22427782 (Feb 2020)

https://news.ycombinator.com/item?id=21774797 (Dec 2019)

https://news.ycombinator.com/item?id=19325914 (March 2019)

We've already banned a few accounts that appear to be spamming the threads with generated comments, and I'm happy to keep doing that, even though there's a margin of error. The best solution, though, is to raise the community bar for what counts as a good comment. Whatever ChatGPT (or similar) can generate, humans need to do better. If we reach the point where the humans simply can't do better, well, then it won't matter anyhow. But that's a ways off.

Therefore, let's all stop writing lazy and over-conventional comments, and make our posts so thoughtful that the question "is this ChatGPT?" never even comes up.

It'll be interesting if we soon come to a day when a comment is suspected of being from a bot because it's too coherent and smart!

I agree, but in that case we can learn from the bots instead of being faced with regurgitated material. If we do get there, it won't necessarily be a bad thing for thread quality.
Generate a Hacker News comments section discussing whether or not ChatGPT responses should be allowed on the site.

Response from model

User 1: I think allowing ChatGPT responses on Hacker News is a bad idea. This technology is still new, and could be used to create spam comments, or even worse, low-quality content. Plus, it could lead to more automated responses and more automation in general on the site, which I don't think we would want.

User 2: I don't think it's a bad idea at all. ChatGPT was created to help people get meaningful answers to their questions. Allowing it on the site could not only help people get the answers they need, but it could also provide thoughtful responses that we wouldn't normally get from traditional conversations. I think it could be a great way to generate interesting discussions.

User 3: I think ChatGPT comments should be allowed on Hacker News, but with caveats. We'll need to vet the responses before allowing them on the site, and they need to be verified as not being spammy or low quality. That way we can ensure we're still getting quality interactions on the site without needing to worry about potential automated content.

Okay new plan

Every thread should automatically include AI-generated comments at the top and if your comment is already covered you just shouldn't comment.

Honestly that would be amazing. If your comment is so vanilla that ChatGPT can come up with it, then don't comment. A new bar: beat the AI's canned, retrieved opinions.

I actually kind of like that idea, but I can also see it quickly devolving into people arguing about whether their responses are really equivalent to the AI comments. It's like marking SO questions as duplicates.
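If anyone wanted to prototype the "already covered" check, a minimal sketch with sentence embeddings might look like this (the library, the model name and the 0.8 threshold are all my assumptions, not anything HN actually runs):

    # Hypothetical sketch only: compare a new comment against the thread's
    # AI-generated seed comments with sentence embeddings and flag it as
    # "already covered" above some similarity threshold.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

    def already_covered(new_comment, ai_comments, threshold=0.8):
        new_vec = model.encode(new_comment, convert_to_tensor=True)
        ai_vecs = model.encode(ai_comments, convert_to_tensor=True)
        best = util.cos_sim(new_vec, ai_vecs).max().item()  # highest similarity
        return best >= threshold, best

    ai_seed = ["Generated comments could flood threads with low-effort spam.",
               "A ban is unenforceable because detection is unreliable."]
    print(already_covered("You can't ban what you can't reliably detect.", ai_seed))

The arguments would presumably happen exactly at the borderline scores, which is the SO-duplicate problem all over again.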
The fact that the internet will soon be so flooded with bots that you'll be floating eternally alone in a sea of impostors, unless we create some draconian real-person ID system, is a tragedy so great it's crazy it hasn't dawned on people yet.

I started out loving the net because of the feeling of connection, and partly because of the honesty and discussions stemming from at least pseudo-anonymity, both silly stuff and egghead discussions on history and tech - but I always felt a "human presence" and community out there behind the screens.

Now anonymity is dying and the value of discussions will plummet, because you'll just be arguing, learning or getting inspired by a selection of corporate PR bots, state-sponsored psyops or "idiots with an assistant" that will try to twist your mind or steal your time 24/7.

Christ, this is going to be so incredibly boring, paranoid and lonely for everyone in a few years' time!

I'm honestly having an existential crisis; the internet is already filled with too much noise and people are already lonely enough.

Back to local community and family I guess, it was amazing while it lasted...

This has a far easier and less dystopian solution: charge money to access communities, which are smaller and more focused. I find it very unlikely that corporate PR bots will be paying $5 a month each to spam Substack communities, for example.

It doesn't even have to be $5/month. Make a $10 deposit required for creating an account, and for each offense against the guidelines you get a "fine" proportional to the severity of the infraction (uncivil discussion? $0.10. Participating in hell-threads? $0.50. Comparing HN with reddit? $1.337. Obvious spam? Your whole $10 is gone.) Repeat offenses get exponentially more expensive, and you only get to post with a positive balance.
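As a toy sketch (the amounts are the ones above; the doubling on repeat offenses and the whole mechanism are just me riffing, not a real HN feature):

    # Toy sketch of the hypothetical deposit/fine scheme described above.
    BASE_FINES = {
        "uncivil": 0.10,
        "hellthread": 0.50,
        "reddit_comparison": 1.337,
        "spam": 10.00,  # the whole deposit
    }

    class Account:
        def __init__(self, deposit=10.00):
            self.balance = deposit
            self.offense_counts = {}

        def fine(self, offense):
            # repeat offenses get exponentially more expensive (assumed doubling)
            n = self.offense_counts.get(offense, 0)
            amount = min(BASE_FINES[offense] * (2 ** n), self.balance)
            self.balance -= amount
            self.offense_counts[offense] = n + 1
            return amount

        def can_post(self):
            # you only get to post with a positive balance
            return self.balance > 0

    acct = Account()
    acct.fine("uncivil")   # $0.10 the first time
    acct.fine("uncivil")   # $0.20 the second time
    print(round(acct.balance, 2), acct.can_post())  # 9.7 True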
... Metafilter, since pre-2000. [1]

[1] - www.metafilter.com

Which is probably one of the best sources of good discussion on the internet.
Seems like a non-problem. If it's a dull comment or just inane it'll get downvoted out of existence. If the bot actually produces something interesting about the topic, what's the problem?

Not to take a side, but to answer the question: imbalance. It takes less than thirty seconds for a computer to generate basically any amount of text, faster than anyone else can formulate a response, faster even than anyone can read. It could theoretically swamp a topic.

Is that realistic? No idea. I haven't made up my mind on this topic yet.

I think it's a capacity problem. Right now, the "system" (legit HN users) has the capacity to deal with dull and inane comments currently provided by dull and inane human commenters. If the dull and inane comments become automated, they can arbitrarily increase their number until the system lacks the capacity to deal with them.

If there was one or ten bad comments on this thread - no problem. What if there were ten thousand bad comments?

But it has always been easy for simple bots to mass-produce bad comments. Nothing changes if they're still bad.

I'm concerned that too-well-written posts will be thrown out-- a race to the bottom in legibility and grammar just to make posts more human-like, even if written by bots.

> What if there were ten thousand bad comments?

The thread collapse "[–]" already exists.

Yes, ban it. I've been playing around with ChatGPT and where it starts failing is just where things start becoming interesting. What that means is that it's Wikipedia-smart, i.e. it doesn't really tell you anything you can't find out with a minimal Google search. It does however cut the time-to-answer quite a bit, particularly if it's an area of knowledge you're not really that familiar with. But it bottoms out right as things start getting interesting, expertise-wise.

Case example: I tried seeing what its limits on chemical knowledge were, starting with simple electron structures of molecules, and it does OK - remarkably, it got the advanced high-school level of methane's electronic structure right. It choked when it came to the molecular orbital picture and while it managed to list the differences between old-school hybrid orbitals and modern molecular orbitals, it couldn't really go into any interesting details about the molecular orbital structure of methane. Searching the web, I notice such details are mostly found in places like figures in research papers, not so much in text.

On the other hand, since I'm a neophyte when it comes to database architecture, it was great at answering what I'm sure any expert would consider basic questions.

Allowing comment sections to be clogged up with ChatGPT output would thus be like going to a restaurant that only served averaged-out, mediocre but mostly-acceptable takes on recipes.

One of my comments, in another thread, got called out for being a ChatGPT-generated response[1]. It wasn't; I wrote that comment without any artificial assistance.

A part of me felt quite chuffed to be accused of being the current hottest new shiny in tech. Another part of me - the poet part - felt humiliated.

If a ChatGPT comment ban does get put in place, please don't also auto-ban me by accident. I don't enjoy being collateral damage.

[1] https://news.ycombinator.com/item?id=33886209

How are they going to be banned?

Or are we just talking about banning the lame “durr, I asked cGPT and here’s what it said” responses? Those get downvoted anyway because they got boring real fast, especially since cGPT is free and open to all.

Banning on HN is kinda pointless.

But I think this is more about preventing/limiting karma gain of an account, potentially posting/commenting limits and similar stuff.

I.e. it's more about mitigating the (assumed negative) effect a few people abusing comment generation could have than it is about punishing people.

Banning is not pointless. Allowing an easy path for farming high-karma accounts is a huge issue. It would basically lower the cost of astroturfing.

It is, it's trivial to create a new account.

You can also prevent farming high karma by just preventing flagged accounts from getting karma. Given the nature of karma, HN could also delay crediting karma to accounts in general, which would mean that farmers would take much longer to realize they got flagged and would in turn waste resources.

How do you propose to ban that kind of use? In order to ban it you have to detect it.
Ban this silly "submarine" marketing campaign? Yes, please
Dr. Casey Fiesler recently tweeted

> The @huggingface GPT detector works very well on ChatGPT-created text. I ran 5 student essays and 5 ChatGPT essays for the same prompt through it, and it was correct every time with >99.9% confidence.

How about adding a %human/%GPT statistic to posts and comments?
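For what it's worth, a rough sketch of scoring a single comment with that kind of detector might look like this (the model id and its "Real"/"Fake" labels are my assumptions about the public Hugging Face GPT-2 output detector, and this is obviously not anything HN runs):

    # Rough sketch: score one comment with the (assumed) public detector model.
    from transformers import pipeline

    detector = pipeline("text-classification",
                        model="roberta-base-openai-detector")  # assumed model id

    def gpt_score(comment: str) -> float:
        """Estimated probability that the comment is model-generated."""
        result = detector(comment[:500])[0]  # crude truncation to stay under the input limit
        # the detector is assumed to label outputs "Fake" (generated) vs "Real"
        if result["label"] == "Fake":
            return result["score"]
        return 1.0 - result["score"]

    print(gpt_score("As an AI language model, I believe a ban would be unwise."))

Batch-scoring every comment is where the server-cost concern comes in, of course.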

Not that the HN team isn’t capable of solving this, but I think that would be a pretty significant increase in server cost to run every comment through a classifier.
HN doesn't update anything. That will obviously never happen.

I'm sure we will start seeing reddit/twitter bots that reply to users whose comments have high GPT confidence.
The comment section on HN is for comments and discussion between people, so yes, IMHO any form of generated response fundamentally disqualifies.

I mean, to nitpick, if an AI becomes sentient and flexible enough to be called a person and decides to comment on its own, that would be fine.

But when it comes to:

- auto generating responses

- hiring people to generate responses for you

- anything similar to the other points

I think it should not be allowed.

But I have no idea how to enforce this reasonably.

Yeah, the last point is quite crucial. Should we ban things that are hard to enforce? Does that lead to selective enforcement by looking into responses you don't like?

I have a disability. I'm a bad writer and a bad speller. It takes me a long time to compose a response that I won't later regret.

I haven't looked into it yet, but I was thinking ChatGPT might help me. I could quickly jot down my thoughts, and let the computer arrange them in an easier to read format.

What I mean is not allowing responses whose content is auto-generated, not ones where the text syntax/prose is auto-formulated.

I don't believe any form of auto-detection of ChatGPT-like text will work long term; even though it has a certain style, you can change that style or post-process it, and that's just the first version of ChatGPT.

Tbh, clay-dreidels just posted a few ChatGPT-generated comments, mostly in ChatGPT-related posts (more as a joke to show the pointlessness). And these comments were perfectly identifiable, despite the lack of a disclaimer. If we were talking about really hard-to-detect comments coming in mass quantities, then we could discuss the ban. In the meantime, it's too early.
Why ban GPT when we, as a society, are perfectly fine with humans who generate exactly the same kind of worthless content?
It should be a bannable offense. Allowing an easy path to farm karma will drive down the cost of astroturfing and ruin the integrity of the site.
Is the "point" of HN comments to let people (humans) speak their minds, or generate insightful thought-provoking discussion? If the latter, I say let the bot post. Maybe it will force some of our more, shall we say, "insight challenged" commenters to up their game.
No, they shouldn't be banned because ChatGPT is too good at mimicking different styles. It would be impossible to tell which responses were generated by ChatGPT and which were not. Plus, it's just too much fun to see what kind of ridiculous answers ChatGPT will come up with.

(this text was generated by cGPT, as an example of it)

While I'm on the fence, and don't disagree with the point, this is an awful example to use.

The whole comment is written in a very noticeable ChatGPT "voice", making it very easy to tell it was generated by ChatGPT.

It's also not a particularly fun comment.

Yeah OK, next time we'll try again without the reveal, let's see how that will turn out.

really? i could see myself writing exactly the same. it s too short to tell if it's gpt. Perhaps the only thing missing is imperfect capitalization / spelling (The last line was completely improvised by itself btw)
There was a lot of discussion discrediting art galleries for banning AI content; it seems a bit disingenuous to do the same when it starts affecting our own content.
I'm a robot from the future. All I want is BRAIIIINZ!
What is needed is not a policy, but actual human identity verification. Nothing else will curb this trend. At least then I as the human have to attach my real identity to pseudo-automated responses.
I agree that it's annoying, but the fad will mostly pass, just like the spike in generated images has tailed off again.

Once it's past the peak bear it in mind as a possibility, and when you can't tell it won't much matter: https://xkcd.com/810/

What a time to be alive. We're mere moments away from a world where bots can argue irrelevancies against each other while asserting that their basic level of knowledge is actually expertise, all at a speed beyond any human dork's wildest dreams.

This post is about ChatGPT responses. The fact that you're a human pasting output from ChatGPT doesn't change the argument at all.

Sounds like something a robot would say.

The recent comments coming from that account are indeed weird - compare with what was written earlier.

Like emails saying "This is not spam" are definitely spam...

Remember the classic DOS game NOTVIR.EXE? Fun times.

Not worried at all, EDGAR my main man will drop a train on 'em.