Discussion about this post

Nick Diakopoulos:

One reason for different expectations of AI systems is that they are engineered, and explanations help assign responsibility for improvement to the developers. So if the system's explanation is that it used a protected attribute in the decision, the developer can be held accountable for that, and for improving the system so it doesn't do it again. Another way to think about it: you can't debug a model if you don't know why it did the bad thing it did.
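A minimal sketch of that debugging idea, assuming a simple tabular classifier: a counterfactual test that flips only the protected attribute and checks whether decisions change. The dataset and feature layout are entirely hypothetical, and the synthetic labels deliberately leak the protected attribute so the test has something to detect:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: column 0 is a protected attribute (0/1 group),
# columns 1-2 are legitimate signals.
X = np.column_stack([
    rng.integers(0, 2, n),  # protected attribute
    rng.normal(size=n),     # signal 1
    rng.normal(size=n),     # signal 2
])
# Synthetic labels that deliberately depend on the protected attribute.
y = (X[:, 1] + 0.8 * X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Counterfactual check: flip the protected attribute, hold everything else fixed.
X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]

changed = (model.predict(X) != model.predict(X_flipped)).mean()
print(f"Decisions that change when only the protected attribute flips: {changed:.1%}")
```

Note that this is behavioral evidence, drawn from patterns of decisions, rather than a claim about the model's internal reasoning.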

Christopher Riesbeck:

Why expect accurate explanations from LLMs when we don't get them from people? Humans frequently say their decisions were not influenced by race or gender, when controlled experiments suggest otherwise. When judging human accountability, we place far more credence in observed communications ("Did you not receive a report...") and patterns of decisions ("Why have you never voted to promote a woman...") than in claims about thought processes.

The LLM's generation process has no more access to its internal computational processes than we have to the workings of our own brains.
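A small illustration of that gap, assuming a local open model loaded through the Hugging Face transformers library (the prompt is hypothetical): the logits are the model's actual computation, while any verbal "explanation" is just more generated text from the same sampling loop, which never reads those internals back.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The loan application was denied because"
inputs = tokenizer(prompt, return_tensors="pt")

# The internal computation: next-token logits over the whole vocabulary.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
top = torch.topk(logits, 5)
print("Top next-token candidates:",
      [tokenizer.decode(int(i)) for i in top.indices])

# The "explanation": generated text, produced by the same sampling process.
output = model.generate(**inputs, max_new_tokens=20,
                        pad_token_id=tokenizer.eos_token_id)
print("Generated continuation:", tokenizer.decode(output[0]))
```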
