Quote:
Originally Posted by RedskinRat
JoeRedskin and CRR:
You've yet to explain why emotion needs to be a part of law and justice, you just assert that it should be.
We'll leave it there. If you can't present an argument then we have no discussion, just a disagreement.
To be fair, you have yet to demonstrate a computer system in existence which could handle the range of human ethos and reach valid conclusions.
Let's take a red light camera, as this is a very simple go-no go situation.
This is an actual example. I'm not sure emotions specifically come into play, but for a computerized justice system to work, one would think infallibility would be a critical piece; otherwise it goes to a human arbitrator, and we are back where we started:
DC uses red light cameras. A driver heading straight is stopped at a red light. He realizes he is in a turn-only lane, so he changes back to the straight lane, which has a green light, and goes forward. A ticket comes later, which clearly shows him switching lanes and proceeding in a legal manner. So he appeals. The judge overturns the ticket.
Now, in a computerized system you have to explain to me,
A) will a human look at the camera picture and validate the claim that the driver executed a legal maneuver?
- or -
B) will the computer system take the red light system's data as correct and invalidate the appeal?
If A is your answer, then humans and human bias are still involved, because maybe the line isn't as clear-cut, and ultimately you have a data entry clerk determining whose appeal is valid and whose isn't.
If B is your answer, then ultimately you will see some atrocities, simply because bad data in equals bad results out.
Now, if you are talking about a hypothetical computer system that hasn't been built or even conceived yet, one that could render decisions without human input yet still draw those fine distinctions between truth and falsehood, fact and fiction, and deliver exact results, then I would say: build it, test it on a small dataset while normal jurisprudence continues, and see where the differences lie.
Skinsguy also makes an excellent point about intuitive responses to whether someone is lying. Would the computer system use lie detector results? Again, its answer is only as good as the input given. It can neither think nor "feel" its way to a truth. And someone would be inputting what it should treat as its truth, or its valid data set and rules.
In this specific case, how would you imagine the occurrence is fed into the computer system? Simplistically: Did Defendant 1 shoot Victim 1? Yes. Computer says guilty. Then you would have to enter extenuating circumstances. Who decides which circumstances qualify? All the laws would have to be programmed in to make sure that every exception or possible exclusion is covered, and at some point someone, either machine or human, will have to make value decisions about whether an exclusion should or should not be accounted for.
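To make that concrete, here is a minimal sketch of the kind of rule-based verdict engine being described. This is entirely hypothetical code with made-up rule names, not any real system; the point is that every exception has to be anticipated and hand-coded by a person, who thereby makes the value decisions for the machine:

```python
# Hypothetical rule-based verdict engine. All fact and rule names are
# invented for illustration; a human must enumerate every rule.

def verdict(facts):
    # Base rule: defendant shot victim -> guilty.
    if not facts.get("defendant_shot_victim"):
        return "not guilty"
    # Every extenuating circumstance must be anticipated in advance,
    # and someone decides which ones make the list.
    exclusions = ["self_defense", "accident", "insanity"]
    for exclusion in exclusions:
        if facts.get(exclusion):
            return "not guilty"  # a value judgment baked in by the programmer
    return "guilty"

print(verdict({"defendant_shot_victim": True}))                        # guilty
print(verdict({"defendant_shot_victim": True, "self_defense": True}))  # not guilty
# A circumstance nobody thought to code (say, duress) is simply ignored:
print(verdict({"defendant_shot_victim": True, "duress": True}))        # guilty
```

Notice the last case: the system isn't wrong by its own rules, it just silently has no concept of the exception, which is exactly the problem.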
Let's take another case: the OJ murder case. The computer is given as fact that a glove was used in the murder. The question is posed, does the glove fit? For the computer it's a yes/no answer. No, it did not. Computer finds not guilty. Heck, defense attorneys would have a field day, as any simple fact that falls outside the established parameters would have to yield a not-guilty ruling. Never mind that humans may lie or tell half-truths, and someone has to sort through that using emotion and gut feeling.
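Sketched the same way (again, hypothetical code, not any real system), a single boolean fact outside the expected parameters flips the entire verdict:

```python
# Hypothetical: one yes/no fact drives the verdict, so any single fact
# falling outside the expected parameters forces an acquittal.

def glove_verdict(glove_used_in_murder, glove_fits_defendant):
    if glove_used_in_murder and not glove_fits_defendant:
        # The machine has no way to weigh shrinkage, latex liners,
        # or a defendant flexing his hand while trying it on.
        return "not guilty"
    return "guilty" if glove_used_in_murder else "not guilty"

print(glove_verdict(True, False))  # not guilty
```

One binary input, no judgment anywhere in the loop, which is the field day for the defense.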
Finally, just to bring TV back in, because I know you'll get this, RR:
Spock would make a great prosecutor, but I wouldn't want him judging me if I happened to circumvent a rule or two to pass a rigged test.