• FiniteBanjo@lemmy.today · 21 days ago

    LLM and ML translation systems generate their output one token at a time. That’s why AI chatbots hallucinate so often: if the model decides the most likely next token is “No” when the correct answer would be “Yes”, the rest of the response devolves into convincing nonsense built on that first mistake. Machines are incapable of the critical thinking needed to discern correct from incorrect and decide whether a contextual response actually makes sense. A minimal sketch of this follows.
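    To make the token-by-token point concrete, here is a minimal sketch in plain Python of greedy decoding. The token probabilities are invented for illustration and not taken from any real model; the point is only the structure: the decoder picks the most likely next token at each step and never revisits an earlier choice, so one early mistake commits everything after it.

```python
# A minimal sketch of greedy autoregressive decoding. The probabilities
# below are made up for illustration; a real LLM computes a distribution
# over its whole vocabulary with a neural network at every step.
NEXT_TOKEN_PROBS = {
    (): {"Yes": 0.48, "No": 0.52},  # barely favors the wrong answer
    ("No",): {",": 1.0},
    ("No", ","): {"because": 1.0},
    ("No", ",", "because"): {"reasons": 0.6, "evidence": 0.4},
    ("Yes",): {"!": 1.0},
}

def greedy_decode(max_tokens: int = 4) -> list[str]:
    """Pick the single most likely next token at each step.

    There is no backtracking: once step 1 commits to "No" over "Yes"
    by a tiny margin, every later token is conditioned on "No", so the
    output stays fluent while being wrong from the first word.
    """
    context: tuple[str, ...] = ()
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(context)
        if not dist:
            break
        token = max(dist, key=dist.get)  # greedy: argmax, no second thoughts
        context += (token,)
    return list(context)

print(" ".join(greedy_decode()))  # -> No , because reasons
```

    Real systems usually sample from the distribution rather than always taking the argmax, but the commit-and-continue structure is the same.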

    • stephen01king@lemmy.zip · 21 days ago

      Those are not examples, just claims about what you expect to happen based on what you think you understand about how LLMs work.

      Show me examples of what you mean. Just run some translations through their AI translator or something and show me how often it produces inaccurate translations. It doesn’t seem that hard to prove what you claimed.