ApexHunter@lemmy.ml to Technology@lemmy.world • "A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit." (English)
3 months ago

They're supposed to be good at transformation tasks: language translation, "create x in the style of y," replicating a pattern, and so on. LLMs are outstandingly good at those.
Using an LLM as a fact-generating chatbot is actually a misuse. But they were trained on such a large dataset and have such a large number of parameters (175 billion!?) that they perform passably in that role… which is, at its core, filling in the call+response pattern of a conversation.
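To sketch what "filling in a call+response pattern" means: a chat interface just flattens the conversation into one transcript and asks the model to continue it. The function and speaker labels below are hypothetical, not any real API — the point is only that the "answer" is the statistically likely next chunk of text after the assistant's cue.

```python
# A chat "conversation" is really one flat transcript that the model
# is asked to continue; the assistant's reply is whatever text the
# model predicts should follow the final "Assistant:" cue.
def build_prompt(history):
    # history: list of (speaker, text) tuples
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append("Assistant:")  # cue the model to fill in the response slot
    return "\n".join(lines)

prompt = build_prompt([
    ("User", "What did the courts reporter actually write about?"),
    ("Assistant", "He covered several criminal trials as a journalist."),
    ("User", "So why did the AI name him as the culprit?"),
])
# The model sees his name next to crime-related words in the transcript
# and completes the pattern -- no fact lookup is happening anywhere.
```

Nothing in that loop checks facts; the pattern-completion is the whole mechanism, which is why the misattribution in the article can happen.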
At a fundamental level, it will never generate factually correct answers 100% of the time. That it generates correct answers more than 50% of the time is actually quite a marvel.