PART 2.5: A Brief Detour

I hope everyone had a wonderful Thanksgiving holiday, and that the time off work gave you a chance to reflect on what we’ve already covered in “What is AI?”

But perhaps you didn’t spend your holiday in heated familial debate about AI and philosophical zombies, so here’s a quick recap:

First, we established that Artificial Intelligence isn’t exactly intelligent, and that the parallels drawn between AI and the human brain serve more as an explanatory analogy than anything else.

Next, we explored the world of the philosophical zombie and concluded that not only is AI unintelligent, it is also entirely incapable of experiencing human consciousness.

Finally, we parted with a promise to explore the perils of anthropomorphizing AI. And we will certainly do so, but I thought it best that we first take a (very) brief detour.

To begin, I’d like you to reflect on your average workday.

How many meetings would you miss without the subtle buzz of a reminder from your smartphone? Would you really remember your coworker’s birthday if Facebook didn’t notify you? Could you remember the name of that sandwich truck -- you know, the one you visited a few months ago and are suddenly and unexpectedly craving -- without Google or Yelp?

I’m sure you’ve already surmised where I’m headed with this line of inquiry, but bear with me.

There is no denying that we rely on our smartphones, and on technology in general, throughout the day. But many argue that this interaction has rendered us lazy -- that, because we so often consult these other technological entities, we are allowing our brains to lapse into an insurmountable torpor.

Furthermore, that argument is often used to dissuade those seeking to leverage technological advancement in day-to-day life.

I wholeheartedly disagree with the “technology induces passivity” argument, and I am confident that our day-to-day lives are precisely where artificial intelligence will prove most useful in the near future.

Indeed, my reliance on technology is no different from my parents’ reliance on the typewriter, nor from my ancestors’ reliance on the sundial. By delegating some cognitive tasks to technology, I create a sort of interconnectivity between myself and the physical world.

That doesn’t mean I am no longer involved in the exchange of information. I still have to use -- and even build -- these tools. They are an extension of me, instantiated by me. They are me. And there’s nothing lazy about that -- in fact, it is in part our species’ propensity for leveraging tools that has allowed us to advance so rapidly.

As philosophers Andy Clark and David Chalmers write in “The Extended Mind” (1998), “Cognitive processes ain't (all) in the head!” As technology advances, we are better able to use it to extend our cognitive functions beyond skin and skull (to borrow their terminology).

It’s all somewhat Heideggerian in nature, but I am steadfast in my belief.  

So, as we approach 2018, artificial intelligence is set to become our latest partner in the long history of human-machine interdependence, and we must embrace that collaboration.

Do not fear human-AI fusion, because it is inevitable -- and that's a good thing. 

*Please note that this blog series is independent of my work at the Wall Street Journal and the aforementioned views do not reflect those of my employer.

Alex Siegman