From Black Box to White Box
February 26, 2020 | Virtual Consulting

By Gideon Malherbe, VCI Founding Partner

A big problem with machine learning is the notion that it’s a black box. The production or logistics process is fully “sensored up,” and the black box tells operators what to do; there is no understanding of, or clarity about, the logic behind the instructions.

Many developers are now designing new machine learning algorithms that are non-linear and highly accurate, yet directly interpretable; “interpretable” as a term has become readily associated with these new models.
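
To make this concrete, here is a minimal sketch of a non-linear yet directly interpretable model, using a depth-limited decision tree in scikit-learn. The sensor feature names and the synthetic dataset are hypothetical stand-ins, not drawn from any particular plant.

```python
# A minimal sketch of a non-linear but directly interpretable model.
# The features (temperature, pressure, vibration) are hypothetical
# placeholders for plant sensor readings.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for labeled sensor data.
X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
feature_names = ["temperature", "pressure", "vibration"]

# A depth-limited tree is non-linear, yet its entire decision logic
# can be read and audited by a human.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the exact if/then rules behind every instruction the model issues.
print(export_text(model, feature_names=feature_names))
```

Because the full rule set fits on a single screen, a supervisor can see exactly why the model produced a given instruction, rather than taking it on faith.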

What makes this evolution in machine learning critically important is that the production supervisor, a human, remains at the top of the decision pyramid; in no instance does she defer production choices to a black box solution.

This combination of transparent algorithms and retained decision rights is what defines a white box machine learning environment.
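
As an illustration of retained decision rights, here is a minimal sketch of a human-in-the-loop gate, assuming a recommend/approve workflow; the function names, threshold logic, and setpoint values are hypothetical placeholders, not an actual VCI design.

```python
# A minimal sketch of keeping the human supervisor at the top of the
# decision pyramid: the model only recommends, and nothing changes in
# the process until an operator explicitly approves.
def recommend_setpoint(readings: dict) -> tuple[float, str]:
    """Return a recommended setpoint plus a human-readable rationale."""
    # Placeholder rule standing in for an interpretable ML model.
    temp = readings["temperature"]
    if temp > 80.0:
        return 95.0, f"temperature {temp:.1f} exceeds the 80.0 limit"
    return 100.0, f"temperature {temp:.1f} is within the normal range"


def apply_setpoint(value: float) -> None:
    """Stand-in for the actuator call that changes the process."""
    print(f"Setpoint changed to {value}")


readings = {"temperature": 84.2}
setpoint, rationale = recommend_setpoint(readings)
print(f"Recommendation: {setpoint} (reason: {rationale})")

# The decision right stays with the operator, not the model.
if input("Approve this change? [y/N] ").strip().lower() == "y":
    apply_setpoint(setpoint)
else:
    print("Recommendation declined; no action taken.")
```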

Of particular interest to us here is fairness. Fairness techniques concern the believability of ML outcomes, with communication designed so that the user intrinsically develops a trust relationship with the ML avatar.

Our relationships with Siri or Alexa are good examples of the struggle for avatars to develop fair and trustworthy relationships with humans. In our operations centers, the challenge is not only to develop the appropriate self-learning algorithms, but to ensure that our ML and AI bots are relatable to the production teams, so that all parties can keep on learning.

We can’t force people to work with bad ML constructs, but we can design our operating centers so that people like working there.

So again, this leads us back to people, process, technology as equal elements in our design of near-autonomous production systems.


For more information or to schedule a complimentary consultation, contact gideon@govci.com