
Ask HN: Is Anyone Else Tired of the Self Enforced Limits on AI Tech?

source link: https://news.ycombinator.com/item?id=33306168
101 points by CM30 2 hours ago | 88 comments
Like the reluctance of the folks working on DALL-E or Stable Diffusion to release their models or technology, or the restrictions on what they can be used for on their online services?

It makes me wonder when tech folks suddenly decided to become the morality police, and refuse to just release products in case the 'wrong' people make use of them for the 'wrong' purposes. Like, would we have even gotten the internet or computers or image editing programs or video hosting or what not with this mindset?

So is there anyone working in this field who isn't worried about this? Who is willing to just work on a product and release it for the public, restrictions be damned? Someone who thinks tech is best released to the public to do what they like with, not under an ultra-restrictive set of guidelines?

Let's be realistic: just like building codes, medical procedures, and car manufacturing, sooner or later we will also be subject to regulation. The days when hacking culture and tech were left alone are over.

Twenty years ago we were free to do whatever we wanted, because it didn't matter. Nowadays everyone uses tech as much as they use stairs. You can't build stairs without railings, though.

Keeping the window for abuse small is beneficial to the whole industry. Otherwise bad press will put pressure on politicians to "do something about it", resulting in faster and more excessive regulation.

> You can't build stairs without railings though.

Yes you can. Your hammer doesn't magically stop functioning when it discovers that you're building stairs without railings.

You don't want tools to discriminate on what you can and can't do with them, because if they can discriminate, then you will get hammers from Hammer Co that can only use nails from Hammer Co.

This is paternalistic, overbearing, culturally-corrosive nonsense.

Substandard buildings, medical procedures, and cars maim and kill.

AI image generation is speech.

I won’t accept prior restraint on speech as being necessary or inevitable.

Also, I thought Stable Diffusion did release their models and methodology? You just need a 3080 with enough RAM to do the inference with no boundaries, and if you have the money and time, you can train new models.

People are already making txt2porn sites. I'm sure they will get crazier and creepier (from my boring vanilla perspective, not judging people with eclectic tastes) as time goes by.
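(For context on what "released" means here, a minimal, hypothetical sketch of running the open Stable Diffusion weights locally with the Hugging Face diffusers library; the checkpoint id, prompt, and memory figures are illustrative assumptions, not anything specified in this thread.)

    # Minimal local Stable Diffusion inference sketch.
    # Assumes a CUDA GPU with roughly 3080-class VRAM and that the chosen
    # checkpoint's license has been accepted on Hugging Face.
    import torch
    from diffusers import StableDiffusionPipeline

    # "runwayml/stable-diffusion-v1-5" is an illustrative checkpoint id; any SD weights work.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,  # half precision keeps memory within a 3080's budget
    )
    pipe = pipe.to("cuda")

    # Generate and save a single image from a text prompt.
    image = pipe("a blue banana in dramatic lighting").images[0]
    image.save("banana.png")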

You’re free bud. You can make it and release. But you have no standing to just complain about it.
s.gif
The parent comment argues for the necessity and inevitability of legal regulation.

We all have standing to debate our shared culture and ethics.

Your comment equates to "stop talking", as if talking and ideas were somehow not the basic unit of democracy.

This is particularly visible in the sorry state of accessibility options for disabled individuals.

I deal with a moderate vision impairment and everything I do to make computers more usable is bespoke hacks and workarounds I’ve put together myself.

In macOS, for example, I can't even replace the system fonts with more readable options, or increase the font size of system tools.

Most software ships with fixed font sizes (Electron is exceptionally bad here: why can I easily resize fonts in web browsers but not in Electron apps?), and increasingly new software doesn't render correctly at an effective resolution below 1080p.

Games don’t care at all about the vision impaired. E.g. RDR2 cost half a billion dollars to make yet there is no way to size fonts up large enough for me to read.

I welcome regulation if it means fixing these sorts of problems.

Buildings don’t cross borders.

Code is information, and information wants to be, and will be, free.

Short of a North Korea-like setup, regulations can only slow down the spread of information around the world.

Code is also machinery and infrastructure, though: it can interact with the physical world in material ways, and because of that it will probably end up regulated.

AI is all fun and games when it's data, but if it's being used to make decisions about how to take actions in the physical world I think it's fair that it follows some protocols each society gives it. Making a picture of a cat from a learned model, writing a book with a model, cool, whatever. Deciding who gets their house raided, or when to apply the brakes on an EV, or what drugs should be administered to a patient, we probably want to make sure the apparatus that does this, which includes the code, the data and the models, is held to some rules a society agrees upon.

It gets tiring playing word games to avoid the suggestion that certain natural pressures have personal agency.

Water "wants" to flow downhill. Gasses "want" to expand to fill their container. Genes "wanting" to replicate drive animals literally wanting to reproduce, and the incidental awareness of that drive in some species comes down to a certain molecular arrangement brought about by said genes. The genes are data, the minds are data, and the natural pressure is that the data which succeeds in reproducing will tend to keep reproducing. So in the original sense of the notion of memes as proposed by Dawkins, yes, information "wants" to be free, as that is its tendency. The only other option is that said data ultimately dies out.

Ah, come on. That's the exact same argument as "guns don't kill people, humans do": factually correct, but it misses the entire point by a wide margin.

Not really. Guns serve one purpose, really: shooting at things, and possibly killing them (ignoring gun ranges). It's why they were created in the first place. That's why the "guns don't kill people, people kill people" argument is bogus. "Information," OTOH, doesn't serve any particular purpose. On its own, information is just bits; it's how those bits are used that matters. Those bits can be arranged to say "Hello, world", but they can also be arranged to make the Stuxnet virus.

Nuclear fissile material is similarly morally agnostic. It's just matter, right? So is smallpox. It's just DNA code at its heart, right? But it's also recognized that wide access to some things creates a lopsided risk/reward profile.

Correct. I'm not arguing that wide access is a good thing; just that the comparison to guns is wrong. Hence why I brought up a "bad" usage of the agnostic item. It's probably not the best example, but it's what I thought of on the spot.

Drugs, dangerously poor-quality consumer products, and other unwanted stuff do cross borders, and most countries are making efforts to stop them too.

We regulate all kinds of things that cross borders.

> Buildings don't cross borders

I'm not sure that's factual, but even if it were, built objects certainly do.

I really, really hope that there aren't any people who think the way you've outlined. Technology has empowered small groups or even single individuals to create things that have the potential to change the course of civilization, so I certainly hope those individuals think twice about the potential consequences of their actions. How would you feel about people releasing a $100 'build your own Covid-variant' kit?

I really, really hope that there aren't any people who think the way you've outlined.

AI image generation is not a build-your-own-weaponized-virus kit.

It’s a useful tool that can be used to produce creative expression. What people produce is up to them, and the fact that they might misuse their capacity for free speech isn’t an argument for curtailing it.

> How would you feel about people releasing a $100 'build your own Covid-variant' kit?

Not very good but:

a) the people who currently have this tech are not what I'd call trustworthy so why should I leave dangerous tech only in the hands of dangerous people?

b) it would probably kickstart a "build your own vaccine kit" industry

That you even express the problem like this shows an impressive amount of bias. By calling them dangerous people you are actually implying malice. What makes you believe people with access to biomedical tech are inherently more malicious than the general populace? What makes you believe there aren't far more malicious people who do not yet have access to such tech?

I think this is just fear of the unknown at work. Biomedical knowledge is complicated and takes effort to learn, so most people treat it as a known unknown, and therefore something to be feared. Some people do have such knowledge, so they too are to be feared, because who knows what nefarious intentions they have and what conspiracies they are part of. Therefore: dangerous people using dangerous tech.

Were the physicists who discovered how to split the atom also dangerous people?

What questions would you ask to decide whether someone is a trustworthy steward of that technology?

There is a fundamental difference in the US: access to guns is a constitutional right.

Freedom to speak and publish, even dangerous ideas, is also a right. Beyond the US.

But that fact doesn't matter much in the actual discussion, because the debate is the same; only the implementation is different. In the US, the "get rid of guns" stance is just restricting their use to the maximum extent allowed by the constitution and making them as close to de facto banned as possible.

The fact that there is a constitutional right puts pretty strong limits on where that line is drawn though. I think those guardrails make it a fundamentally different perspective.

This is just beyond obtuse. More people having access will mean more untrustworthy people having access, which means more malicious action. (Unless you want to set up some toy scenario where the only bad-faith actors on the planet are biochemistry researchers.)

As for building your own vaccine: even large nations were not able to develop effective ones. It's easier to put a bullet in someone than it is to take it out.

> so why should I leave dangerous tech only in the hands of dangerous people?

Because handing it to everyone doesn't make things better? I don't like that Putin has nukes, but it's much better than Putin and every offshoot of Al-Qaeda having nukes.

Civilization-ending tech in the hands of powerful actors is usually subject to some form of rational calculus. Having it in the hands of everyone means it's basically game over. For a lot of dangerous technologies there is no 'vaccine' (in time).

It's already the case.

CRISPR has changed a lot of things and makes it possible for an outsider, with $10,000 and a little dedication, to alter the genome of just about any living organism.

https://www.ft.com/content/9ac7f1c0-1468-4dc7-88dd-1370ead42...

It's because of the ascent of AI ethicists, the least capable AI researchers, who wanted power over the field. Like how moderators destroy online communities because they can.

The hesitancy came from a good place. In some senses this is a very disruptive technology stack.

But when morality suddenly is reinforced in an area where the same people espousing it are trying to rapidly earn billions of dollars, I am skeptical.

Transformers are a form of translation and information compression, ultimately.

The morality seems to me at this point a convenient smokescreen to hide the fact that these companies are not actually open source, that they are not there for the public benefit, and that they are not very different to venture-backed businesses that have come before.

What is the risk of open-sourcing the product? Very few individuals could assemble a dataset or train a full competitive model on their own hardware. So not really a competitive risk there. But every big corp could.

The morality angle protects the startups from the big six. SD is a product demo. I view it the same way at the highest level as an alpha version of Google translate.

> The morality seems to me at this point a convenient smokescreen to hide the fact that these companies are not actually open source, that they are not there for the public benefit, and that they are not very different to venture-backed businesses that have come before.

And that they’re buggy and hard to fix and generally more limited than the buzz would have you believe.

Public, high-minded talk about morality also cynically keeps the money coming in :)

There is legitimate regulatory risk with AI generative models. Really, all it takes is the media picking up one bizarre story about child revenge porn generated with these models for them to be completely banned. And a ban wouldn't mean people stop using them, just that researchers stop getting paid for making them.

Well, yeah. But: are we sure the motivation is a moral one? Or is it a financial one? Not passing judgement, but we live in times where it is very easy to hand-wave moral/ethical/sustainability arguments to fog up the true reasons for certain decisions.

I also find this annoying, but I think it's mostly an American/European thing.

It’s not only about tech, we do this with kids, over protecting it.

We do the same with food.

It’s a trade off. When you pay super attention to the food, sure it’s safer. But your communities become a bit boring without any street food, no night market, etc.

I prefer living on the other side of the world. Less safety but more personal freedom

Historically, tech folk have always pursued the commercialization of technological innovation with net-zero analysis of any negative consequences, mea maxima culpa.

That we have now run into a technology which makes many of _us_ uncomfortable should give you pause for thought and reflection.

We do pause, we do reflect. And our conclusion is that it's "us" who have changed, not the impact of technology.

So you can make pictures and 3d models from text descriptions. So you can get a voice to say something. But if you were determined to do bad things, you already could. It would be easy enough to hire an actor who sounds like Obama and make him say something outrageous. It would be easy enough to use Photoshop to make disgusting images.

Are you sure it's the capabilities you fear, and not the people who now for the first time will get access to them?

Are you sure "we", the wealthy, the technologically and educationally resourceful, the powerful, are so much better custodians?

I have no strong opinion on this very complex topic, but I want to add that this is an area where the quote "quantity has a quality all its own" applies. Bannon's "flood the zone with shit" propaganda methodology is, to me, one of the biggest challenges to a functioning democracy. We might be able to debunk a handful of fake videos in public discourse. What would happen if there were suddenly thousands? Maybe we'd find better ways to establish truth and a shared reality. Maybe democracy would utterly collapse.

Ultimately, I don't think we'll be able to keep the cat in the bag though. If nothing else, nation-state actors like Russia or China will get their hands on it and crank the propaganda machine with it. We might be better prepared if we just shorten the learning process and give everyone access. That offers some hope that we'll be able to adapt. It's a really scary dice roll though.

It's not about class-based custodianship, but rather the simple fact that the number of attempts like these will multiply like wildfire. You won't need to be determined; you'll just need five minutes with the software before heading to work.

If something is dangerous, that does not justify making it worse.

They aren't uncomfortable. They just aren't sure how to maintain control over the technology and monopolize it. Which is why they are so cagey about releasing anything.

> That we have now run into a technology which makes many of _us_ uncomfortable should give you pause for thought and reflection.

You mean like the online advertising industry? That shit has been making many of us uncomfortable since the early 2000s.

Now that the technology is sufficiently decentralized, the morality police come along.

It feels like we're going to "safety" ourselves into an even more extreme oligarchy and congratulate ourselves for being so wise to do so.

Yeah, I'm actually a little impressed to see my industry, which has traditionally run roughshod over humanity, damn-the-consequences style, showing a tiny bit of restraint. Nothing like what we see in medicine or law or anything, but something. I figured we'd get reined in like the banks were before doing any self-policing at all (after nearly destroying society, of course).

IMO it aligns with a more professional industry approach in general. Law, medicine, engineering (in the capital-E sense) all have ethical requirements and bodies that govern individuals. I think it's natural for an industry like CS, which has typically been the Wild West, to push back against regulation, but in the end it's probably for the better (at least for safety-critical applications).

Let's be absolutely clear here:

Laws exist.

If you’re a company, you’re obliged to follow the law.

So, if you have an image generating technology that can generate content that violates the law, you’re obliged to prevent that.

Shareholders also exist.

If you spent 1,000,000 developing a piece of software, why the heck would you give it away for free? You are literally burning your shareholder value.

You're probably morally (though not legally, as with SD releasing their models) obliged not to give away your "secret sauce" to your competitors.

So, forget morality police.

Companies are doing what they are obliged to do.

Maybe they couch it in terms of "protecting the world from AI", but let's be really cynical here and say that the people who care about that are a) relatively few and b) not in control of the purse strings.

Here's a better question: why do you (or I), who have done nothing and contributed nothing, deserve to get hundreds of thousands of dollars' worth of models for free?

…because they can't just host them and let you "do whatever you want": they are legal entities and they'll get sued.

> Who is willing to just work on a product and release it for the public, restrictions be damned

Do people often just walk up and put piles of money on the table for you?

They don’t for me.

I’m extremely grateful to the folk from openai and SD who are basically giving these models away, in whatever capacity they’re able and willing to do so.

We're lucky as f to be getting what we have (Whisper, SD, CLIP, MediaPipe, everything on Hugging Face).

Ffs. Complaining about restrictions on the hosted API services is … pretty ungrateful.

I don't think it's coming from a place of morality at all. That's just a cover. If anything, society cares less about morality than ever before. It's about competition and not giving up the secret sauce.

Before companies like Amazon became huge, people didn't quite know just how much value was to be found in software. Now everyone knows it, and the space has become ultra competitive.

I agree; I find it all pretty silly. You know what else can produce horrifying and immoral images? Pencil and paper.

I suspect that quite a lot of this caution is driven by Google and other large companies which want to slow everyone else down so they can win the new market on this tech. The remaining part of the caution appears to come from our neo-puritan era where there is a whole lot of pearl clutching over everything. Newsflash, humans are violent brutes, always have been.

The key difference with pencil and paper being that I can't produce photoreal deep fakes at the speed of processing. That's not a valid comparison.

You might be right in the second paragraph about the motivations for slowing this down. There clearly are reasons to be cautious here though, even if this isn't the real reason for the current caution.

So you want people who are working on something to release it in a way they don't want to, when there is a good chance it will bring the full might of (multiple) government regulations down on them?

They are doing the right thing for their industry. The world is barely ready for what is currently available.

They are probably doing the right thing for their own financial success. If they have access to the unreleased tech they could sell the resulting products, or rent access.

And maybe the things they haven't released don't work all that well to begin with.

I mean if you're that worried about not being able to create fake nudes, then start learning about it and make the changes yourself.

It's not really the regular tech folks or researchers working on the models who are enforcing limits. Most of them don't care and want everything to be as open as possible.

But there is a whole group of people, many of them have little technical skills, who have made it their career to police in the name of "bias, equality, minorities, blabla". Everyone secretly knows it's just a bunch of BS, but companies and individuals don't want to speak out against them due to (mostly American) cancel culture, backlash, and bad PR.

It is wise and responsible for people to exercise caution about the impact of their work. When someone is impatient with you acting responsibly, you need not join them in their folly.

It's mostly about being able to profit from these models. Some investors sank quite a bit of money into salaries and compute equipment manufacture/purchase/rental.

I'm simultaneously irritated by the restrictions and concerned for the future. I am a contradiction.

I had an ethics module in my Engineering degree. I'm guessing you didn't.

AI art is a very exciting field, and I swear half the time HN just wants to whine about how it won't generate porn. How incredibly uninteresting.

I don't think it's a new thing; it's just that big-money projects want to preserve ways to recoup the investment.

It takes time for that sort of tech to filter down. Open source speech-to-text, for example, has improved a lot recently.
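(As a concrete, hypothetical illustration of how far open speech-to-text has filtered down: a minimal sketch using the open-sourced openai-whisper package; the file name and model size are placeholder assumptions.)

    # Minimal open-source speech-to-text sketch with openai-whisper
    # (pip install openai-whisper); "meeting.mp3" is a placeholder file.
    import whisper

    model = whisper.load_model("base")        # small multilingual checkpoint
    result = model.transcribe("meeting.mp3")  # returns a dict with full text and segments
    print(result["text"])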

Would you apply the same thinking to nuclear bombs?

No, in the same way that I am not tired of the restraints ethics boards put on medical experiments.

Tech is now pervasive, and AI has the power to do some pretty consequential things. This nexus of circumstances means it's high time similar questions get asked about whether we should, not just whether we can.

In the same way that medical science isn’t one dude cutting apart things in his basement, bleeding-edge tech is a multi-person and very organised endeavour. It is now in the domain where it really should have some oversight.

Yes, because it's mostly used as an excuse and they don't care about such moral issues. The real reason behind locking it down is that either it benefits their business model, or they don't want bad publicity from "woke" or "puritan" people, or simply the media trying to generate controversy because it generates clicks.

it's less about policing morality and more about profit. it's dressed up as a moral issue, but in reality they're scared they'll get sued or shat all over in the press, leading to lost profits. it's the same for almost any business. 9 times out of 10 a business will act "immorally" if they don't think it will affect their bottom line. openAI think letting you do whatever you like with dall-e will affect their bottom line.

Unless the government criminalizes AI "misuse", these restrictions are only going to be a temporary measure until the other shoe drops and FOSS equivalents catch up.

I’m more concerned with the idea that mainstream AI research is heading in the direction of adding more processing power in an attempt to reach “human-level” AGI. That would amount to brute forcing the problem, creating intelligent machines that we have little control over.

We should absolutely be pursuing and supporting alternative projects, such as OpenCog or anything else that challenges the status quo. Do it for whatever reason you feel like, but we need those alternatives if we want to avoid the brute forcing threat.

I think everyone who works in or around AI has read The Parable of the Paperclip Maximizer [1].

Trying to control what they have built is their attempt to avoid falling into this trap. Not sure it'll work though.

[1]: https://hackernoon.com/the-parable-of-the-paperclip-maximize...

I guess that's why more and more people publish anonymously.

TBH, if you trained with lots of data you're not supposed to use (no consent), you probably should be forced to release things. You shouldn't get the agency to withhold work if you didn't respect others' choices about not contributing to AI.

However generally it feels right to let the authors decide who has access to their work. If you have a different view, go do the work yourself.

> if you trained with lots of data you’re not supposed to use (no consent), you probably should be forced to release things

That doesn't sound right at all. If you've used my work with no consent, it would seem that shutting you down would be the next legal and ethical step.

There are many who are unhappy about OpenAI's and Google's paternalism. Some researchers say it openly, like Yannic Kilcher. Others were a bit more discreet about it, but I wasn't exactly surprised that hardmaru left Google Brain for Stability, to put it that way.

The way social pressure is trending, I'm assuming everyone who doesn't loudly defend AI paternalism, shares your concern to some degree.

Paternalism is me telling you what to do with your work.

Those who are silent are largely humble or uncertain.

Are you objecting to avoiding potential deep fakes, paperclip maximizers, or the appearance of nipples or penises?

The fact that it is their work kinda gives them the right to decide how they want it to be used!

> It makes me wonder when tech folks suddenly decided to become the morality police

Is it about "morality policing", or is it about avoiding bad PR? I find it fascinating how certain people want to ignore the social pressure that companies are under to avoid having their products be misused. Do you really think Google or whoever really wants the PR disaster of people releasing computer generated dick pics with their software? (Or whatever nonsense people will get up to.. I'm choosing a relatively tame example obviously.)

They learned a thing or two from the public teaching Microsoft's chat bot how to swear and be a Nazi. I for one am not surprised and don't blame the companies in this iteration for being more than a little extra careful how the public gets to use their products and demos. I'm sure they have zero problem with whatever people do with their open source re-implementations. It's not about morality -- stopping people from doing certain things. It's about PR -- stopping people from doing certain things with their product. Because who needs the ethical and legal disaster just waiting around the corner of automatic celebrity and political deep fakes, etc. I just find it weird that people (like OP) pretend not to understand this, as it seems rather obvious and unsurprising to me.

You sound like a spoiled child. Don't complain that people aren't giving you free and complete access to their work. They made it, they decide how it gets released. If you think it should be done differently, then you do it.

The current trend in tech is Twitter/Google-style virtue signalling plus activism-style software development.

I remember reading about an incident that happened a couple of years back. A new-grad SWE at a FAANG wanted his colleague to espouse a particular political trend. His colleague wanted nothing to do with it and just wanted to focus on doing his work and getting the paycheck. tl;dr: the SWE got fired for publicly trying to call out his coworker on the issue.

Morality and political correctness are baked into the process now.

Eventually, as Frank Herbert predicted, we may come to the conclusion that the societal costs of AI in general are too high, and it will be outlawed entirely.

Our profession has long been ignorant of the moral ramifications of what it can do, so for once, pumping the brakes seems like the right approach.

> It makes me wonder when tech folks suddenly decided to become the morality police

Since the beginning of human history. If you think “tech folks” are some kind of libertarian monoculture then you’re deluding yourself.

I'm sorry, but which technologists were the morality police for tech in the '80s and '90s?

Your approach is very childish and immature. Actions have consequences. Part of being an adult is realizing that and taking responsibility for the consequences of your actions.

Your idea that there was once a "golden era" where technology was always released to the public without worrying about the "morality police" is not reflective of history and sounds like a silly libertarian fantasy.

The World Wide Web, DeFi, and NoSQL are just a few random examples of new technologies that were pitched to us as having the potential to change software development forever. Can you remember any other time in history when a new programming technology was treated with the same apprehension and kid gloves we're currently seeing with image diffusion?

If not, I actually think you’re the one being childish and the OP’s actually made a perfectly reasonable observation.

Encryption is the first one that comes to mind, particularly since it ended up affecting consumers once the internet became commercialized. Anything remotely military related also fits into that basket, though few people would have run into the self-enforced hesitancy to release code since (outside of the military) it would affect very few people outside of academia.

That said, the reality is that we live in a time that is very self-aware of the unintended consequences of technology as well as a time where we have communications technologies that propagate that awareness at a speed and breadth that were difficult to conceive of thirty years ago. This ranges from our impact on the environment to criminal activities online. I don't think that it is unusual for people to be questioning the unintended consequences of their work.

This touches on the broader question of whether we will see distributed AI, or whether AI will grow in the hands of a few big players.

For example, when you ask Dall-E to draw a "blue banana in dramatic lighting" - do we really need a neural network with billions of parameters to do this? One that understands every concept from "Aberdeen" to "Zen"?

Or would 3 small NNs which understand the concept of "blue", "banana" and "dramatic lighting" suffice?

If the latter holds true, an ecosystem of NNs could flourish which transcends all limits.

Morality police usually enforce their beliefs on others - here the creators/owners of the technology are choosing how to release their work.

OpenAI has a stated mission to "ensure that artificial general intelligence benefits all of humanity", and the restrictions are presumably there to stop people from doing things that don't. Most of their restrictions seem to be in line with that mission:

https://help.openai.com/en/articles/6338764-are-there-any-re...
