Wandering Adventure Party

A practical question that's come up in Arcalibre development: how should we handle AI-vulnerable dependencies?

Uncategorized · 8 Posts · 3 Posters

Cassandra Granade 🏳️‍⚧️ (@xgranade)
#1

A practical question that's come up in Arcalibre development: how should we handle AI-vulnerable dependencies?

What should be done about dependencies that turn AI-vulnerable? - rereading Forums (forums.rereading.space)

> Hi. I'm Damien, a contributor to the project currently working on the pydofo library that's meant to replace Calibre's PoDoFo binding. Upon interacting with upstream to file a bug report [https://github.com/podofo/podofo/issues/318], I found out that the developer and maintainer of the library is experimenting with using Copilot [https://github.com/search?q=repo%3Apodofo%2Fpodofo+copilot&type=issues] to author PRs [https://github.com/search?q=repo%3Apodofo%2Fpodofo+copilot&type=pullrequests]. Per cgranade's taxonomy, this means that unless the maintainer changes his(?) mind, the project is on track to become AI-vulnerable. So what am I to do about it? On one hand, this seems very much contrary to the purpose and the ethos of the rereading project. On the other hand, the landscape looks grim: complete isolation from AI dependencies looks like an increasingly hard problem, and the situation may repeat for any dependency at any point in time. What do you think should be the course of action here?
>
> Update: the exchange on the GitHub issue went a bit further, and the maintainer clarified his position and "experiments": https://github.com/podofo/podofo/issues/318#issuecomment-3967036898
>
> > In this case it would have been an experiment only. As long as I am the maintainer, I guarantee this library will be free of AI slop. […]
>
> I'm not intellectually satisfied by the response, but I guess that exchange can be taken at face value and provides some sort of policy for AI contributions to the library. The issue of having a policy for handling such cases in the rereading project still persists.
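
As a practical aside, the kind of check Damien describes (searching a dependency's repository for Copilot activity) can be scripted. Below is a minimal sketch in Python against GitHub's documented /search/issues endpoint; the repo name is just the PoDoFo example from the bug report, and a keyword match only finds mentions of Copilot, not proof of LLM authorship.

```python
# Sketch: count issues and PRs in a repo that mention "copilot".
# Uses GitHub's public search API; unauthenticated calls are heavily
# rate-limited, so this is only suitable for occasional spot checks.
import requests

def copilot_mentions(repo: str) -> dict[str, int]:
    counts = {}
    for kind in ("issue", "pr"):
        resp = requests.get(
            "https://api.github.com/search/issues",
            params={"q": f"repo:{repo} copilot type:{kind}"},
            headers={"Accept": "application/vnd.github+json"},
            timeout=30,
        )
        resp.raise_for_status()
        counts[kind] = resp.json()["total_count"]
    return counts

if __name__ == "__main__":
    # Matches are mentions only; a human still has to read the threads.
    print(copilot_mentions("podofo/podofo"))
```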

Nelson (@skyfaller)
#2, in reply to #1

@xgranade We need a movement to preserve a full LLM-free stack. Otherwise I think people will give up on evicting LLMs when they realize that even if their project is LLM-free, everything above or below them in the stack is infected.

Sadly, I still think a major problem for any project that avoids LLMs is that its code will be irresistible to LLMs, since human-written code helps them avoid model collapse (and degradation before reaching that point). Marking LLM-free code in a way that keeps it from helping LLMs is an unsolved problem.

Cassandra Granade 🏳️‍⚧️
#3, in reply to #1

Even if one sets everything about ethics aside (something that one should very much not do, but give me a moment), this is a serious practical question. There's a base assumption of fitness for purpose that goes into taking a dependency on code, an assumption that cannot be held for code extruded by an LLM *no matter how careful the code review is*.

But also, it's probably not possible at this point to avoid all LLMed code in a dependency chain, so the question remains: what *do* you do?

Cassandra Granade 🏳️‍⚧️
#4, in reply to #3

The other day, I almost fell for a doctored, AI-generated image. It wasn't that the image was convincing in the details, or that it was in some way a really good fake. It's that social media presents you with a lot of images, and I generally rely on marginal trust in the people I follow and the people they boost, such that I don't necessarily look at the details of every single image.

Nelson (@skyfaller)
#5, in reply to #3

@xgranade Why would you trust your own code if you know you can't trust your dependencies? I don't see how you can accept slop dependencies and avoid despair.

I think you just have to triage, adopt your most important dependencies, and see if you can build community / solidarity with others to handle the rest.
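
For what it's worth, the first triage pass can be made mechanical for a Python project: enumerate what's actually installed and diff it against a hand-maintained list of dependencies someone has adopted (read and vouched for). A minimal sketch using only the standard library; the ADOPTED set is a hypothetical placeholder a project would maintain by hand.

```python
# Sketch: list installed packages that nobody has adopted/reviewed yet.
from importlib.metadata import distributions

# Hypothetical allow-list of dependencies someone has read and vouches
# for; a real project would keep this under version control.
ADOPTED = {"requests", "lxml"}

def needs_triage() -> list[str]:
    installed = {dist.metadata["Name"] for dist in distributions()}
    # Note: real tooling would normalize names per PEP 503 first.
    return sorted((n for n in installed if n and n not in ADOPTED),
                  key=str.lower)

if __name__ == "__main__":
    for name in needs_triage():
        print(f"not yet reviewed: {name}")
```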

Cassandra Granade 🏳️‍⚧️
#6, in reply to #4

In this case, the extruded images were being shared in order to critique them, but it's still an interesting reminder to me that critical reading sometimes fails simply because of the heuristics I rely on when reading: I assume that there's another mind behind the stuff that I'm reading — a mind that I can develop a trust relationship with.

I think of code review as being somewhat similar, though perhaps with a significantly lower default state of trust.

Cassandra Granade 🏳️‍⚧️
#7, in reply to #5

@skyfaller Yeah, that's where I'm at (and please, if you're willing, I'd appreciate it if you'd share your thoughts on the forum post as well!).

I brought up the example of Python yesterday, because there's now at least some portion of the Python interpreter — a tiny one, to be clear and to be fair — that I cannot trust solely by trusting the professionals who developed and reviewed it. I don't see that problem getting smaller.

✧✦Catherine✦✧
#8, in reply to #3

@xgranade i routinely filter out unfit-for-purpose libraries from my dependencies (including those that look good at first), and have been doing so for years before the AI boom; the oft-recounted argument that this is "exactly the same as before" does not hold because of the difference in degree, but the fundamentals of "read the code of your fucking dependencies and reconstruct a theory of mind to see if there was even one" continue to hold

Reilly Spitzfaden (they/them) shared this topic.
