According to the release:

  • Adds experimental PostgreSQL support
  • The code was written by Cursor and Claude
  • 14,997 lines of code added, 10,202 lines removed
  • reviewed and heavily tested over 2-3 weeks

This makes me uneasy, especially as ntfy is an internet facing service. I am now looking for alternatives.

Am I overreacting or do you all share the same concern?

  • d15d@feddit.org · 28 days ago

    They don’t even trust it themselves. This is from the release notes:

    I’ll not instantly switch ntfy.sh over. Instead, I’m kindly asking the community to test the Postgres support and report back to me if things are working

    Fuck that.

  • patrick@lemmy.bestiver.se · 28 days ago

    It looks like the tool is more or less built by a single developer (you already trust their judgment anyway!), and even though the code came through in a single PR, it was a merge from a branch with 79 separate commits: https://github.com/binwiederhier/ntfy/pull/1619

    Also, glancing through it a bit, huge portions of it are straightforward refactors or even just formatting changes caused by adding a new backend option.

    I’m not going to say it’s fine, but they didn’t just throw Claude at a problem and let it rewrite 25k lines of code unnecessarily.
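GitHub exposes every pull request as a fetchable ref (`pull/<N>/head`), so a reviewer can walk such a branch commit by commit instead of reading one squashed diff. A minimal sketch using a throwaway local repo (the branch, file, and author names here are made up; the fetch command for the real PR is shown only as a comment):

```shell
# Sketch: reviewing a branch commit-by-commit rather than as one squashed diff.
# For the real PR you would first run:
#   git fetch origin pull/1619/head:pr-1619
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main repo && cd repo
git -c user.email=r@example.com -c user.name=reviewer \
    commit -q --allow-empty -m "initial"
git checkout -q -b feature
for i in 1 2 3; do
    echo "change $i" > "file$i.txt"
    git add "file$i.txt"
    git -c user.email=r@example.com -c user.name=reviewer \
        commit -q -m "step $i: add file$i.txt"
done
# The squashed view shows everything at once...
git diff --stat main..feature
# ...while the branch itself is a list of individually reviewable commits:
git log --oneline --reverse main..feature
```

`git log --reverse` lists the commits oldest-first, the order a reviewer would actually read them in.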

    • mudkip@lemdro.id · 27 days ago

      Any AI usage immediately discredits the software for me, because it calls into question all of their past and future work.

        • mudkip@lemdro.id · 27 days ago

          Linus sent an email recently to the Kernel Mailing List trashing AI slop and rejecting AI generated patches. The fact that he used it to play around with a script doesn’t invalidate the fact that he distrusts code written by LLMs when it actually matters.

  • henfredemars@infosec.pub · edited · 28 days ago

    I definitely share your initial concern. Without a strong review process to ensure that every line of code follows the intent of the human developer, there’s no way of knowing what exactly is in there, or the implications for the human users. And I’m not just talking about bugs.

    They say it’s reviewed, but the temptation to trust blindly is there. In this case, the developer appears to have taken some care.

    The code was written by Cursor and Claude, but reviewed and heavily tested over 2-3 weeks by me. I created comparison documents, went through all queries multiple times and reviewed the logic over and over again. I also did load tests and manual regression tests, which took lots of evenings.

    Let us hope so. Handle with care to ensure responsibility is not offloaded to a machine instead of a person.

    • NoFun4You@lemmy.world · 26 days ago

      It’s like people thinking skilled engineers can’t vet AI output. AI is pretty good for programming.

      • Ohi@lemmy.world · 26 days ago

        You’re absolutely right, and the vast majority of people on this platform seem to get offended by anything AI-related. Software engineers have been reviewing code written by other people since the dawn of the craft. Guess what, y’all: AI-generated code looks exactly the same, if not better, on the first pass at creating a thing.

        Downvote me all you want, homies. You’re living in a fantasy if you think all AI is slop. Sure, I can see how it’s ruining some content on the Internet, but for code-related tasks, it’s going to dramatically change the world for the better.

        • MerryJaneDoe@lemmy.world · 2 days ago

          I think you would need to first make the case that software is making the world a better place. So far, it’s got a spotty record…

          • Ohi@lemmy.world · 2 days ago

            The same thing happened to music when GarageBand and similar tools lowered the effort required to produce quality tracks. It took power away from the old gatekeepers and gave it to people with ideas but not traditional access. AI is doing that to software now.

  • notabot@piefed.social · 28 days ago

    I’m assuming this is some sort of canary message to indicate that the code base has been compromised, the author can’t talk about it, and everyone should immediately stop using the service. Surely no-one would be unwise enough to commit this otherwise?

    Even ignoring the huge red LLM flag, a 25kLOC delta in a single PR should be cause for instant rejection as there’s no way to fully understand or test it, let alone in 2-3 weeks.
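One way to sanity-check the "mostly formatting" defense made upthread: git can report how much of a delta survives once whitespace-only churn is excluded. A minimal sketch with a throwaway repo (assumed setup; for the real PR you would diff against its merge base instead):

```shell
# Sketch: separating whitespace-only churn from substantive changes in a diff.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main repo && cd repo
printf 'line one\nline two\n' > code.txt
git add code.txt
git -c user.email=r@example.com -c user.name=reviewer commit -q -m "initial"
# Re-indent both existing lines (formatting only) and add one real line:
printf '    line one\n    line two\nline three\n' > code.txt
git diff --shortstat        # counts every changed line, formatting included
git diff -w --shortstat     # -w ignores whitespace-only changes
```

The first command reports 3 insertions and 2 deletions; with `-w` only the one genuinely new line remains.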