RealWildRanter on scored.co
10 months ago · 1 point (+0/-0, +1 score on mirror) · 1 child
A machine without software is nothing but an expensive paperweight.
Though you raise an interesting philosophical question. An LLM cannot really think outside the bounds of its training dataset.
The problem is that the training dataset is so large that it's impossible to filter out all the unwanted ideas or concepts its developers don't want in the final product.
So they code some business logic to avoid talking about prohibited subjects. You can see that when these responses take almost no time compared to regular responses. In that sense, when a user succeeds in making the software ignore such rules, that's when the machine "admits" things.
Here is an [instance](https://scored.co/p/19BZpfTHE7/x/c/) that clearly exemplifies such behavior.
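To make the mechanism above concrete: a minimal sketch (purely hypothetical, with placeholder blocklist terms and a stand-in model call) of the kind of pre-generation guardrail being described. A blocklist check is a cheap string scan, so a canned refusal comes back almost instantly, while a regular response requires an expensive model call.

```python
# Hypothetical guardrail sketch; BLOCKLIST terms and run_model are placeholders,
# not any vendor's actual implementation.
BLOCKLIST = {"prohibited_topic_a", "prohibited_topic_b"}

CANNED_REFUSAL = "I can't help with that."

def respond(prompt: str) -> str:
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKLIST):
        return CANNED_REFUSAL      # fast path: no model inference at all
    return run_model(prompt)       # slow path: actual generation

def run_model(prompt: str) -> str:
    # stand-in for an expensive LLM inference call
    return f"(model output for: {prompt})"
```

The timing difference the comment points at falls out of this structure: the refusal path never touches the model, so it returns in microseconds.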
You have not explained how a machine "admits" things, which is an act of sentience.
Yes I did, read it again.
I guess it kind of fits