ARCHIVED CONTENT
You are viewing ARCHIVED CONTENT released online between 1 April 2010 and 24 August 2018, or content that has been selectively archived and is no longer active. Content in this archive is NOT UPDATED, and links may not function.

By Michael Simon
Our modern lives are filled with black boxes: things that we understand in terms of the inputs they require (click the mouse, turn the wheel, insert a slice of bread) and the outputs we receive (your computer beeps, your car turns, you get toast!). Yet between the input and the output, a whole series of things happens that we can't see, can't explain, and – most importantly – don't actually need to explain to accomplish our desired task. As long as the inputs are understandable and the outputs are what we expect, what lies in between can be completely opaque. I don't need to know how my toaster works, as long as I get my toast.
So why is the fact that machine learning (a/k/a “predictive coding”) is a black box such a problem? Is it because human review of documents (i.e., an eyes-on-all-docs full review) is somehow more transparent? Of course not. We have study after study demonstrating the greater accuracy and effectiveness of review assisted by machine learning (when used properly).
Read the complete article at: Your ROI Is Coming Out of my Pocket