ChatGPT calculated that taxing U.S. billionaires at the 22% effective rate paid by average Texans would generate $161 billion to $1.37 trillion annually, depending on how the tax is structured.

Edit: I posted this because I imagined LLMs would have readily accessible figures that random reporters may not.
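
For context, the quoted range is just the same 22% rate applied to two very different tax bases. Below is a rough back-of-the-envelope sketch in Python; the base figures are not sourced from the article, they are simply the quoted results divided by 0.22, so treat them as placeholders.

```python
# Rough sketch of where the quoted range comes from: the same 22% rate
# applied to two very different tax bases. The base figures are NOT
# independently sourced -- they are the article's outputs divided by
# 0.22, included only as placeholders to show the arithmetic.

RATE = 0.22  # effective rate attributed to "average Texans"

tax_bases = {
    # billions of USD; placeholder values back-solved from the quoted results
    "annual billionaire income (low-end scenario)": 732,
    "total billionaire wealth (high-end scenario)": 6_227,
}

for name, base_billions in tax_bases.items():
    revenue = RATE * base_billions
    print(f"{name}: 22% of ${base_billions:,}B ≈ ${revenue:,.0f}B per year")

# Prints roughly $161B and $1,370B ($1.37T), matching the quoted range.
# The spread exists because the choice of base (income vs. wealth)
# changes the result by nearly an order of magnitude.
```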

  • LLMs have the same information a reporter can find online or in other public records, but more often than not they conflate separate things into new, inaccurate info, or straight up make shit up that sounds accurate to someone who doesn’t know anything about the subject.

  • Kronusdark@lemmy.world · 2 days ago

    So, we are citing ChatGPT for news now?

    And not even trying to hide the fact?

    Jesus. We’re cooked.

    • Keilik@lemmy.world · 2 days ago

      I’m with you on that one. I mean, it really wouldn’t be that hard to just… do the math on this, especially if it’s your job to write articles like this.

      Maybe the phrase “we asked ChatGPT” is the new hotness in clickbait, but fuck me if it isn’t depressing.

    • santa@sh.itjust.works · 1 day ago

      ChatGPT was cited as the source. That’s not terrible. Not double-checking those numbers is not great, but we aren’t doomed just because it was cited. We would be doomed if this became the norm.

      • Kronusdark@lemmy.world · 1 day ago

        If I can’t cite Wikipedia as a source in a school paper, journalists shouldn’t be allowed to cite ChatGPT for an article.

        Research is part of their job.

  • EvilBit@lemmy.world · 1 day ago (edited)

    Big brain move, asking an LLM to do economics math. They’re large LANGUAGE models, not large MATHEMATICAL models. They forgo rigorous methodologies in favor of stochastically predicting correlations based on intractably large datasets, which can lead to such impressive mathematical feats as failing to count to three. Great job, Yahoo.

    • REDACTED@infosec.pub · 6 hours ago (edited)

      Seriously. My sister believed ChatGPT could think and do calculus or math, so I gave it a rather simple but definitely unique math task. First I put it into Google’s search calculator and showed her the result, which was correct. Then I gave the exact same thing to ChatGPT, and the result was completely different and false.

      • EvilBit@lemmy.world · 5 hours ago

        My favorite example was when they gave all the top LLMs the task of figuring out how many circles in an image were touching. Every single one failed miserably at a task that would have been trivial for a 3-year-old.

        Unless the answer was five. Because of the Olympics logo.

  • Pfeffy@lemmy.world · 21 hours ago

    I don’t understand why I still have to see posts from this user, whom I’ve blocked over and over for months.