
Wandering Adventure Party


So you’ve decided not to use AI.

11 Posts 4 Posters 87 Views
Chris Trottier (atomicpoet@atomicpoet.org) · #1

    So you’ve decided not to use AI. Good for you—most people haven’t.

    OpenAI already has 800M users, a faster adoption rate than any other online service has ever managed. That toothpaste isn’t just out of the tube; it’s currently base-jumping off the CN Tower.

    And this is why the moral victory speeches don’t land. You can personally reject AI, boycott it, block everyone who breathes near it. That’s a lifestyle choice, not a societal plan. The rest of the world has already moved on, including governments, corporations, schools, media, finance, and the bored teenager who just generated 900 anime raccoons at 3AM.

    So the real question isn’t “Should AI exist?” because it already does. The real question is “Now what?” Now that the technology is here, scaling, and unlikely to be uninvented, what do we build to make sure it doesn’t bulldoze everything in its path?

    That means regulation, consent frameworks, licensing systems, labour protections, antitrust scrutiny, transparency requirements, and energy standards. It means figuring out ownership, compensation, authorship, cultural rights, and community control. It means deciding who benefits and who pays the cost.

    History gives us the same homework every time a transformative tool appears. Pretending we still live in the world before the tool never works. Responding strategically sometimes does.

    So stay opposed if you want. That’s your prerogative. Just don’t confuse a personal refusal with a plan for the future. The future is already happening. The only question left is whether we shape it or get shaped by it.

fotoFi · #2

      @atomicpoet I'm paying attention to your arguments. Like many creatives, I'm wary of "AI" but know the term isn't a monolith; there are plenty of proven uses in medical research, for example. The issue for me is the way personal data is used and the increase in surveillance by companies and the state apparatus. So control over the technology is the issue. How do you suggest organizing, regulating, etc. when large monopolies and the state control it?

Chris Trottier (atomicpoet@atomicpoet.org) · #3

        @fotoFi Good comment, and yes: “AI” isn’t one thing, and the surveillance angle is the part worth sharpening knives over.

        A few practical levers:

        • You can run models locally. When the compute lives in your basement instead of in a surveillance casino, the data-harvesting panic drops fast.
        • Efficiency is improving. We’re not going to be powering these things with a small sun forever.
        • Co-ops are a real option. If we can have credit unions and dairy boards, we can have community-owned models that don’t phone home to Palo Alto.
        • Governments aren’t immortal. Policies flip all the time. Pretending the current regulatory posture is permanent is how we lose.
        • And humans still have unfair advantages: taste, intent, context, and the ability to write something an AI couldn’t predict even with a thousand GPUs strapped to its forehead.

        So yes, be wary—but there are ways to wrestle this thing into something that serves people rather than turning all of us into product.

Cy · #4
            lol, how many of those people were forced to use it? You want to eat? Don't want to get evicted? Better go use AI, or else!

          ...how many of those people are themselves bots?
Cy · #5
            Oh as for "now what" the answer is to freeze the accounts of its owners, repudiate all debt owed to them including investments, and then start a government reclamation and recovery office, to demolish all the data centers thus built, and attempt to restore the local soil to a point it can at least be used for cattle pastures if nothing else.

            When you say "It's here, you can't stop it, tough beans buckaroo" you have to understand that there is always a way to stop it. If not in the above controlled fashion, it'll happen due to catastrophic resource exhaustion, and hopefully relentless bombing by various illegal militias who know just how bad this stuff is if left to fester.
Chris Trottier · #6

              @Cy Exactly; that proves my point.

              Once a technology gets embedded into workflows, hiring, education, platforms, and basic survival economics, “just don’t use it” stops being a real option for most people. Choice becomes a luxury.

              Which is exactly why the conversation has to shift from personal refusal to labor protections, regulation, ownership, and governance. If people are being dragged into an AI-shaped world, we owe them guardrails—not vibes.

Chris Trottier · #7

                @Cy I mean, that’s ambitious. But freezing trillions in assets, tearing down data centers, and converting them into cow pastures isn’t a regulatory strategy; it’s a full regime change. And given that AI is now a macroeconomic life raft funded by governments themselves, overthrowing the global economy feels… slightly harder than passing antitrust.

90s Script Kiddie · #8

                  @atomicpoet "OpenAI already has 800M users which is a faster adoption rate than any other online service—ever."

                  And yet OpenAI isn't profitable, isn't projected to be for years, and its flagship product is constantly plagued by the worst kinds of marketing snafus: ChatGPT encouraging delusional messianic thoughts in the mentally ill, playing an active part in preventing a teen with suicidal ideation from getting help and directly encouraging their suicide, and, more mundanely, simply being unable to do basic counting, math, and deductive reasoning (which it will never be able to do). All that, plus the fundamental flaw of hallucination, for which there is no solution and which makes LLMs essentially unsuitable for any kind of mission-critical work.

                  Then there's the unavoidable fact that almost every single AI pilot in the corporate world is failing, and that most organizations that adopt AI tools have to mandate their use because people simply don't like them or want them. And "not being liked or wanted" is, sorry to say, not typically a feature of world-changing tech that will achieve universal adoption.

                  Then, there is the growing consensus that the current AI craze is an economic bubble, which could potentially collapse and cause a recession. Also not typical of inevitably world-changing technology. More typical of the dot com boom, or the NFT craze.

                  Then there is the history of AI hype and investment, which has swung between summer and winter throughout the information age. Most human cultural things do this. "Endless AI Summer" is, in the context of known history, unlikely.

                  Finally, I disagree with your "the genie is out of the bottle" assertion as evidence that we all need to get used to some theoretical new reality. Plenty of technologies fail in the marketplace and are thereafter never seen again, or are restricted to specialized functions and industries: supersonic aircraft, gas turbine engines, crypto (which found success as an investment vehicle and money-laundering tool but has literally no other utility and no impact on most people's everyday lives, despite what its boosters say; shockingly, their claims are highly similar to what AI boosters say, e.g. adopt it or you'll be left behind, it's The Future, etc.).

                  The fact is, culture plays a big part in whether a technology lives or dies, and culture is created by all of us, individually and collectively - so saying "No" to AI has power. It is not only the state that has the power to regulate behavior. Social expectation is incredibly powerful as well. If using an AI summary gets you looks of disgust across the meeting table, that is a strong incentive to not do it again.

                  I'm truly tired of the 'It's inevitable, get used to it' messaging, and so very very unimpressed by the idea that our governments, which are transparently in thrall to billionaires, are going to protect us from the misuse of this tech. *We* protect us from it, and we do it by making its use unpopular and unprofitable through our actions.

Chris Trottier · #9

                    @90s Script Kiddie Look, the profitability thing doesn’t impress me as an argument. Facebook wasn’t profitable for years either, yet here we all are, doomscrolling.

                    Sometimes adoption comes first, business model later. Maybe OpenAI figures it out, maybe it doesn’t. If you’re convinced it never will, fantastic. That’s a life-changing short position waiting to happen. And I genuinely hope you get rich off it.

                    But the idea that cultural side-eye will stop people from using AI? Come on. If peer pressure worked, vaping wouldn’t exist, SUVs wouldn’t exist, and TikTok would have died in beta. Usage keeps climbing because people find it useful, not because Sam Altman is hypnotizing the masses.

                    And yes, governments will be slow to regulate. On that we agree. Which means the “we’ll stop it by collectively rejecting it” strategy feels more like a vibe than a plan. So if we’re ruling out regulation, and peer pressure isn’t doing much, what exactly is the realistic path forward here?

90s Script Kiddie · #10

                      @atomicpoet I lost a longer post wherein I detailed all of the ways LLMs have shown themselves to be unreliable technology that's incapable of cause-and-effect reasoning. Can't be arsed to re-write it but, short version: these things have routinely given people advice that, if followed, could or did kill them; they don't know the difference between information in their training data and information they invented; and due to their fundamental architecture they will never be capable of anything more than generating convincingly lifelike language. A computer program that could reason would be a game changer. That is not what LLMs are, but it is what they are being marketed as. As more and more high-profile failures occur, and as these companies *continue* to not be profitable, that is going to become apparent even to the tech-illiterate C-suites that have drunk the Kool-Aid.

                      I've lived through a couple of technological sea changes. The internet becoming ubiquitous, computers getting tiny... those were things that literally changed the world, and *nobody* had to say "You better get a smartphone or you'll be left behind"; it was self-evident. *Nobody* had to mandate the use of tablets in business. People bought their own and used them. People actually *wanted* these things because they had actual uses. They *actually* saved time. LLMs pretend to save time. Sure, you summarized hundreds of pages of text down to their salient points in a few seconds, except one of those points is a lie and you don't know which one. You prototyped a software project in an hour, but it's riddled with security vulnerabilities and, moreover, is unmaintainable because you can't explain code you didn't write to a colleague. LLMs as they exist today are snake oil, plain and simple, and no amount of new data centres is going to change that. There is no "there" there. It's hype.

                      So, agree to disagree that the tech is revolutionary. Brass tacks, nuts and bolts, it isn't. Machine learning and the ability to easily discern patterns in data, that's a game-changer of a sort - except it's been around a long time, it's not new. The new hotness is LLMs, and frankly, they suck. They're flashy, they can pull off some neat tricks, but there's no killer app, because the fundamental flaws are always gonna getcha. Billions of dollars and probably hundreds of man-years went into GPT-5 and it still couldn't accurately count the number of Bs in the word blueberry. It just doesn't pass the smell test. Yes, lots of people are excited and jumping on board, but a reckoning is coming.
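To be clear about how trivial the failed task is: counting letters is a deterministic, character-level operation that a couple of lines of ordinary Python handle perfectly. The sketch below assumes nothing about any model's internals; it just shows the computation being asked for.

```python
# Counting letter occurrences is a plain character-level operation.
# LLMs process tokens rather than individual characters, which is one
# commonly cited reason they stumble on tasks like this; plain code does not.
word = "blueberry"
b_count = word.count("b")  # occurrences of the letter "b"
print(b_count)  # prints 2
```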

                      Also, agree to disagree about whether culture matters. What others think of us is humanity's great obsession; we invented morality and taboos. The universe is indifferent if you punch your AI-loving boss in the face, but it is *very* socially frowned upon, and because of that there are consequences for doing it. Culture doesn't come from nowhere; it emerges from history, our subjective embodied experience, the collective unconscious... and all of those things are subject to change when humans put their minds to it. Lots of things have been "inevitable": monarchy, slavery... we changed them by resisting them, by shouting about them, and in some cases by beheading the folks responsible for them. Culture, what people say and do, our memes (not internet memes, but actual viral ideas), matters a great deal.

                      As for how to handle things going forward? My personal plan is to resist AI in every business context where it is introduced to me. I will do everything in my personal power to spread knowledge about the human and environmental costs of this technology, and about how it cannot be trusted for mission-critical tasks. I will share the studies that show it doesn't increase profitability and the articles about the suicides, poisonings, and deepfake porn videos in high schools that are enabled by giving AI companies money. I will share the analyses that show this is an economic bubble. In short, I will do my damnedest to make it unpopular, and I will try to convince others to do the same.

                      • 90s Script Kiddie9 90s Script Kiddie

                        @atomicpoet I lost a longer post wherein I detailed all of the ways LLMs have shown themselves to be unreliable technology that's incapable of cause and effect reasoning. Can't be arsed to re-write it but, short version, these things have routinely given people advice that if followed, could or did kill them, don't know the difference between information they have in their training data and information they invented, and due to their fundamentel architecture will never be capable of anything more than generating convincingly lifelike language. A computer program that could reason would be a game changer. That is not what LLMs are, but it is what they are being marketed as. As more and more high profile failures occur, and as these companies *continue* to not be profitable that is going to become apparent even to the tech-illiterate C-Suites that have drunk the kool-aid.

                        I've lived through a couple of technological sea changes. The internet becoming ubiquitous, computers getting tiny... those were things that literally changed the world and *nobody* had to say "You better get a smartphone or you'll be left behind" it was self-evident. *Nobody* had to mandate the use of tablets in business. People bought their own and used them. People actually *wanted* these things because they had actual uses. They *actually* saved time. LLMs pretend to save time. Sure, you summarized hundreds of pages of text down to their salient points in a few seconds, except one of those points is a lie and you don't know which one. You prototyped a software project in an hour, but it's riddled with security vulnerabilites and moreover, is unmaintainable because you can't explain code you didn't write to a colleague. LLMs as they exist today are snake oil, plain and simple, and no amount of new data centres is going to change that. There is no "there" there. It's hype.

                        So, agree to disagree that the tech is revolutionary. Brass tacks, nuts and bolts, it isn't. Machine learning and the ability to easily discern patterns in data, that's a game-changer of a sort - except it's been around a long time, it's not new. The new hotness is LLMs, and frankly, they suck. They're flashy, they can pull off some neat tricks, but there's no killer app, because the fundamental flaws are always gonna getcha. Billions of dollars and probably hundreds of man-years went into GPT-5 and it still couldn't accurately count the number of Bs in the word blueberry. It just doesn't pass the smell test. Yes, lots of people are excited and jumping on board, but a reckoning is coming.

Also, agree to disagree on whether culture matters. What others think of us is humanity's great obsession. We invented morality and taboos. The universe is indifferent if you punch your AI-loving boss in the face, but it is *very* socially frowned upon, and because of that, there are consequences for doing it. Culture doesn't come from nowhere; it emerges from history, our subjective embodied experience, the collective unconscious... and all those things are subject to change when humans put their minds to it. Lots of things have been "inevitable" — monarchy, slavery... we changed them by resisting them, by shouting about them, and in some cases by beheading the folks responsible for them. Culture — what people say and do, our memes (I do not mean internet memes, I mean actual viral ideas) — matters a great deal.

As for how to handle things going forward? My personal plan is to resist AI in every business context it is introduced to me in. I will do everything in my personal power to spread knowledge about the human and environmental costs of this technology, and about how it cannot be trusted for mission-critical tasks. I will share the studies that show it doesn't increase profitability and the articles about the suicides, poisonings, and deepfake porn videos in high schools that are enabled by giving AI companies money. I will share the analyses that show this is an economic bubble. In short, I will do my damnedest to make it unpopular, and I will try to convince others to do the same.

                        Chris Trottier
                        wrote on last edited by
                        #11

                        90s Script Kiddie I appreciate the depth here, genuinely. And just to reset the frame—I’m not cheerleading AI. I’m not convinced LLMs are revolutionary, intellectually coherent, or even durable. They may absolutely collapse under the weight of their own hype.

                        My point is simpler: history is full of world-changing tech that did not look like it at the start. The internet was compared to a fax machine. The iPhone was mocked as a toy. The iPad was declared pointless. Adoption didn’t happen because they were perfect—it happened because they got incrementally less terrible, and culture adapted around them.

                        Right now, AI is spreading despite fear, not because of euphoria. Hundreds of millions of people are using it. Could that be a fad? Totally. But what if it isn’t? What if peer pressure, moral arguments, and personal refusal don’t slow it down?

                        That’s the uncertainty I’m sitting with—not inevitability, not hype. Just the possibility that “this sucks” and “this will stick around anyway” can both be true.

                        And in that scenario, what’s the strategy? How do we shape ownership, governance, labour policy, safety, accountability, environmental costs? Because “I hope everyone stops using it” is more of a wish than a plan.

                        I’m not speaking in certainties. I’m speaking in contingencies.
