Put simply, AI systems will sometimes make mistakes. Given the volume of products and services that will incorporate AI, the laws of statistics ensure that, even if AI does the right thing nearly all the time, there will be instances where it fails. An AI-enabled robotic surgery tool might take an action during an operation that results in avoidable harm to a patient. An AI-driven algorithm used to evaluate mortgage applications might make decisions that are biased by consideration of impermissible factors, such as race. A driverless car might fail to avoid an accident that later analysis shows was preventable.

While some of those failures may be benign, others could result in harm to persons or property. When that occurs, questions of attribution and remedies will arise. Whose fault is it if an AI algorithm makes a decision that causes harm? How should fault be identified and apportioned? What sort of remedy should be imposed? And what measures can be taken to ensure that the same mistake will not be repeated in the future?