
Artificial Intelligence to support cognitive laziness

Back in the heyday of ancient Greece, the now-Latin phrase “deus ex machina” (“god from the machine”) was a theatrical device used to deliver an easy solution to a difficult situation with no hope of quick resolution: The playwright got lost in his own architecture of drama? No worries, just bring in a god, let him fly in from high above, and the situation is resolved in no time.

As easy as that. Compared to this, cutting the Gordian knot seems almost too complex a solution, at least one that is not as universally applicable as the deus ex machina concept.

Today, with all the hype for Artificial Intelligence, or AI, I get that “deus ex machina” feeling quite often:

Yesterday I read the transcript of an interview by Recode’s belligerent Kara Swisher with Reid Hoffman, a Silicon Valley legend (PayPal Mafia, LinkedIn founder, partner at Greylock Partners), needless to say. The topic was fake news and all the bad stuff that comes with social media. Kara promised to be “real tough” and get into a “real fight” with Reid, and the topic is definitely one that allows for enlightening debate and argumentation. However, the questions and answers were dull, disappointing, and of little cognitive depth.

If you really want to mull through a transcript of bad, non-fluent language, look no further:

The one thing that really struck me was stumbling over Reid’s comment on how to deal with untrue messages and inappropriate content on social networks:

I think one could argue that they’re doing it a little too slowly, although as long as it’s resolute and you’re fixing it, I think that’s important.

I think ones of the things … And so people are talking about, can AI technology help with this stuff?

Sure, text classification algorithms (which are part of so-called AI technology) can help. But defining what is right or wrong, appropriate or inappropriate, is a fine line that is difficult even for humans and requires true understanding of semantics, which machines do not have; just read about the Chinese Room Argument, well known in the academic circles dealing with the subject matter. But I don’t want to go deeper into technical details.
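To make concrete what such a text classification algorithm actually does (and how little “understanding” it involves), here is a minimal sketch of a Naive Bayes classifier in plain Python. All sample texts, labels, and function names are illustrative inventions, not taken from any real moderation system:

```python
# Minimal sketch of a Naive Bayes text classifier (illustrative only).
# It counts words per label and picks the label with the highest
# log-probability; there is no semantic understanding whatsoever.
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (text, label) pairs. Returns per-label word counts
    and per-label document counts."""
    word_counts = defaultdict(Counter)   # label -> Counter of words
    label_counts = Counter()             # label -> number of documents
    for text, label in samples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest log-probability,
    using add-one (Laplace) smoothing for unseen words."""
    vocab = {w for counter in word_counts.values() for w in counter}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)  # prior
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) /
                              (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

The model happily assigns a label to any input based purely on word statistics; whether that label reflects what the text actually means is exactly the gap the Chinese Room Argument points at.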

The point is that even highly respected thought leaders resort to “AI technology will fix it” far too often, instead of really analyzing the problem’s structure and finding a less trivial but more thoughtful response.

The problem permeates all levels of technology bravado and merry followership (it is not just the thought leaders who are infected): Last year I had the pleasure of serving on the jury of the major digital challenge of one of Munich’s technology business clubs. A competition was announced, and groups of smarty-pants university kids (mind you, Munich’s two universities boast a pretty good reputation) had to come up with concepts for putting AI to use.

The result was sobering. Whatever problem they tackled, the solution was already there: just AI, omnipotent and omniscient AI. No deep thought, no thinking harder, because AI is that simple box you can throw any crap into and it will return gold.

When I pointed these concerns out and explained that AI is not that deus ex machina box, I was met with utter incomprehension.

The whole situation, though on a smaller scale, reminds me a little of “Big Data” and NoSQL databases: no longer forced into rigid relational database schemata, and lured by the doubtful bliss of “N = all” (collect all data instead of taking samples and structural subsets), many companies merrily embraced this approach as an excuse not to think. All too often a so-called “data lake” was built and all data simply dumped into it. In the good old times (yes, they have always been better; this is one thing that will never change throughout history), people would sit down, put on their thinking hats, and think first: What problems is my data mart or warehouse intended to solve? Which data do I need? In what granularity and structure? All gone. Deus ex machina works for data, too.
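The contrast between thinking first and dumping everything can be sketched in a few lines of SQL. The table and column names below are hypothetical examples invented for illustration, not from any real company’s warehouse:

```python
# Illustrative sketch: schema-first design vs. the dump-everything "data lake".
# Uses Python's built-in sqlite3 module with an in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")

# Thinking first: decide which questions the warehouse must answer, then
# model only the data needed, at the right granularity, with constraints.
conn.execute("""
    CREATE TABLE daily_sales (
        sale_date   TEXT    NOT NULL,        -- day-level granularity suffices
        product_id  INTEGER NOT NULL,
        units_sold  INTEGER NOT NULL CHECK (units_sold >= 0),
        revenue     REAL    NOT NULL CHECK (revenue >= 0),
        PRIMARY KEY (sale_date, product_id)  -- duplicates rejected at load time
    )
""")
conn.execute("INSERT INTO daily_sales VALUES ('2018-05-01', 42, 10, 199.90)")

# The schema directly answers the question it was designed for:
total = conn.execute(
    "SELECT SUM(revenue) FROM daily_sales WHERE sale_date = '2018-05-01'"
).fetchone()[0]

# The "data lake" alternative skips all of these decisions: a single blob
# column accepts anything, and the thinking is deferred to query time.
conn.execute("CREATE TABLE lake (raw_blob TEXT)")
conn.execute("INSERT INTO lake VALUES ('whatever... json, csv, logs')")
```

The lake table will cheerfully swallow anything; the hard questions about granularity and structure have not disappeared, they have merely been postponed until someone tries to query it.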

To conclude, the point I want to make is that the virtue of deep thought is far too often traded for seemingly simple technology solutions, AI in particular. Don’t get me wrong: I am a big fan and ardent lover of AI. It was my major field of research during my PhD and postdoc years.

I just believe that, for one, expectations are grossly inflated, and, for another, as mentioned before, people stop seeking proper solutions and simply switch into lean-back mode.

“AI will fix it”.

However, I rather reckon that “(AI) winter is coming”. It would not be the first time; we have seen AI winters before.
