3 Principles for Protecting the World from A.I. Bias
Until the late 1960s, we knew very little about what went into the foods we bought. Americans prepared most food at home, with fairly common ingredients, and we didn’t see much need to know more. Then food production began to evolve, and our foods contained more and more artificial additives. In 1969, a White House conference recommended that the Food and Drug Administration take on a new responsibility: developing a way for consumers to understand the ingredients and nutrition of what we eat.
That task took two decades. It wasn’t until 1990 that the FDA published rules mandating nutrition labels on packaged food. In other words, from the moment in the late ’60s when we recognized what we needed, it took 20 years to get the safeguards in place.
Like the arrival of processed foods, the advent of artificial intelligence marks a new age, and whether it turns out to be good or bad for us will depend on what goes into it. The difference is that, at the pace A.I. is developing, we do not have 20 years, or even two, to put safety measures in place. The good news: Businesses can take the first and most critical step, identifying harmful or unacceptable A.I. bias, and then rapidly coalesce around the principles that mitigate it.
A.I. bias occurs when software behaves in a way that is unintended or, worse, malicious. In the case of hiring, for example, we could design an A.I. system to look for the best candidates for a role. The A.I. would look for exactly what we specify: relevant work experience, a strong educational background, and perhaps community service. Over time, though, the A.I. could exclude an entire population just because of the classes they took in college. It might do this by noticing that candidates who do community service tend to take certain courses, and then treating those courses as a proxy for service itself, even though the connection isn’t causal in any way. In other words, A.I. could unintentionally lead to poor hiring decisions.
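To see how such a proxy gets learned, here is a minimal sketch using synthetic data, hypothetical feature names like `took_course_x`, and a plain logistic regression standing in for the hiring model (all assumptions, not a description of any real system). Community service genuinely predicts a good hire, but it never appears on the application, so the coursework that merely correlates with it absorbs its predictive weight:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Legitimate signals that appear on the application (hypothetical).
experience = rng.normal(0, 1, n)   # relevant work experience (standardized)
education = rng.normal(0, 1, n)    # strength of educational background

# Latent factor: community service. It truly matters for the role,
# but suppose it is never captured as a structured field.
service = rng.binomial(1, 0.4, n)

# Spurious proxy: volunteers happen to take course X far more often.
# Correlation, not causation.
took_course_x = rng.binomial(1, np.where(service == 1, 0.8, 0.1))

# Ground truth: hiring quality depends on experience, education, service.
logits = 1.0 * experience + 0.8 * education + 0.6 * service
good_hire = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# The model trains only on what it can see -- including the proxy.
X = np.column_stack([experience, education, took_course_x])
model = LogisticRegression().fit(X, good_hire)

for name, weight in zip(["experience", "education", "took_course_x"],
                        model.coef_[0]):
    print(f"{name:>14}: {weight:+.2f}")
```

Run this and `took_course_x` earns a clearly positive weight alongside experience and education, which is exactly how a system can end up screening candidates by the classes they took without anyone intending it.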
Read the rest on Fortune.