Would like to propose "Fractal Moral Hazard" as a descriptor of generative AI:
No matter which part of the technology you choose to look at, and at what level of detail, you will discover a morally questionable or repulsive aspect of it.
-
@hzulla It doesn't quite capture the nature of the beast. If you zoom to the core, as you would have to do with a real fractal, it's just a generative statistical model applied to tokenized language, bitmaps etc. It is no more intrinsically wrong or immoral than logistic regression.
It is, alas, society itself that is the problem. Layer upon layer of malfunctioning patterns: captured politicians, greedy investors, a digitally illiterate public etc. Such a society can produce monsters from anything.
-
@hzulla the silver lining is that a better society could pick up the same raw ingredients (chips, networks, programming abstractions, mathematical algorithms etc.) and do something amazing with them.
It's not utopian: Linux, Wikipedia, OpenStreetMap and so many other great things have happened (including this fediverse). It's just that digital tech confers superpowers, and the positive forces in society were slower to recognize it than the dark side.
-
@openrisk @hzulla „Technology itself is neutral“ is a strawman argument because relevant technology is always used by someone, in a concrete context and with a concrete goal in mind. You can argue that a statistical generator itself is neutral, but that is purely apologetic when all the relevant uses are bad.
The first goal must always be: Stop the current evil usage! And only then it's worthwhile looking at potentially „good" use cases.
-
@jlink @hzulla The thesis we are arguing is that "AI" is an immoral fractal all the way down. My argument is that at the bottom of it, it's simply statistical processing of data, something that has been happening in some form or another for more than a century and in many sectors. Some of these uses are good, some are bad, and they are never uncontroversial (they are just models). To treat "AI" as something else simply misidentifies the real problems (which I indicated).
-
it's also just plain false. a gun isn't "neutral", it's made to kill people and that's all it does.
"but there are valid uses for an LLM" yeah and there are valid uses for gunpowder, but we're not talking about the general concept of gunpowder, we're talking about a fucking gun.
the LLMs deployed in the current hype craze are made to consume a nation's worth of fossil fuels to give plausible-sounding responses to general input so a CEO can feel like his dick is big enough. and that's all they do.
-
@malicethegray @jlink @hzulla well I could also argue it's an apoplectic reaction, but mastodon encourages me to be calmer.
In that metaphor, are LLMs guns or are they knives?
If we accept at all the use of statistical models in society, they are just the latest generation of knives, and it all boils down to the context of their production and use. Which is *highly* amenable to change, and not intrinsic.
-
Reversing the dystopian use of digital technology will require quite a bit more than stylized demonstrations of anger.
The challenge goes very deep and touches every aspect of digital tech: Are mobile devices guns or knives? Should we ban them because currently they are guns in the exclusive service of surveillance adtech? Well, in the scheme of things it would take literally nothing to liberate people from surveillance if there were a widely usable FOSS mobile.
-
@openrisk you're a real fucking prick you know that