Here’s my take on AI:
It is positively Darwinian. The WSJ Opinion that inspired this article (by three distinguished fellows, including Henry Kissinger) compared it to the technology that printed the Gutenberg Bible in 1455. I would go a step further and compare it to the meteor that hit Chicxulub in the Yucatan 65 million years ago, which left a crater 150 miles wide and changed life on earth. (This is a great metaphor: the dinosaurs who had ruled the earth became extinct, and what we know as birds survived.)(i)
It is Darwinian because only those that adapt will survive. Many will become extinct.
Of course, our natural System 1(ii) reaction is to fear that which we don’t know. Images of The Terminator and 50+ years of Doctor Who scare the snot out of us. Dinosaurs in our society and particularly in Government will try to squash it. Too late. It ain’t going away. The cat is out of the bag.
Bottom line for me: it is a blessing which, if handled well, can reposition our relationship with technology, with positive effects we cannot yet know.
AI is not human; it never will be. Its “brain” is more powerful than ours in some respects, and in others it isn’t. The parts of our brain that have developed to handle insight, creativity, empathy, etc. have not been duplicated. If you ask ChatGPT, it will tell you the same.
So how do we dummies deal with generative AI, and what will be the result? The answer, as always: it depends.
One great result of the advent of generative AI is that we have the opportunity to rethink our relationship with technology. We all know people and companies that accept technology as superior to human effort simply because it is technology. Countless billions have been spent on Canned Vegetable software (canned vegetables are already cooked; you can’t change them), with results often ranging from disappointing to disastrous. (I knew a CFO who committed suicide after realizing that he had led the way to an expenditure of millions on software that just didn’t work.)
Now we have the opportunity to reassess our relationship with technology: understanding that it can’t totally replace us, then figuring out what we can gain from this remarkable advance so that it serves us without enslaving us.
The authors of the WSJ Opinion offer some remarkable insights regarding humans (they refer to us as Homo Technicus):
First, will we be able to recognize what the technology can and cannot do? This is critical if we are to maximize the software without minimizing ourselves:
“Will we be able to recognize its biases and flaws for what they are? Can we develop an interrogatory mode capable of questioning the veracity and limitations of a model’s answers, even when we do not know the answers ahead of time?” (iii)
Second, what must we do to create and maintain this relationship?
“It is important that humans develop the confidence and ability to challenge the outputs of AI systems.”
“It is urgent that we develop a sophisticated dialectic that empowers people to challenge the interactivity of generative AI, not merely to justify or explain AI’s answers but to interrogate them.”
We will have to learn new behaviors and shitcan the automation bias:
“Humans will have to learn new restraint. Problems we pose to an AI system need to be understood at a responsible level of generality and conclusiveness. Strong cultural norms, rather than legal enforcement, will be necessary to contain our societal reliance on machines as arbiters of reality. We will reassert our humanity by ensuring that machines remain objects.”
The problem with this, in my opinion, is that there are far too many selfish and even evil dumbasses in positions of authority, both in business and in government. There is a huge risk of polluting what could be a great leap for mankind. (I am willing to bet that most of us could name names if asked. I can.)
One of the areas most directly impacted by generative AI is education. So how do we as educators incorporate this powerful tool into our curricula without squashing human insight and creativity?
The authors offer their recommendation:
“Teachers should teach new skills, including responsible modes of human-machine interlocution. Fundamentally, our educational and professional systems must preserve a vision of humans as moral, psychological and strategic creatures uniquely capable of rendering holistic judgments.”
I agree with this. So far (and I know we are only at the beginning of this journey), what has been successful for me as an instructor (NYU SPS, Division of Programs in Business, Integrated Marketing & Communication) is to encourage students to use ChatGPT: ask it questions (which I phrase so they all get the same answer), then (1) assess the validity and completeness of the answer and (2) critique the response, acknowledging what is workable and what is missing.
Up to now, the students’ responses to this method have been twofold: (1) a healthy respect for the capabilities of generative AI, and (2) an equally healthy skepticism and caution not to accept the responses at face value. ChatGPT is a team member; it does not replace either the professor or the students’ intellect AND emotion (important AND). Our brains can be a healthy combination of System 1 and System 2, which I believe generative AI will have a hard road duplicating, if it ever can.
Of course, as always, we have to be wary of bad actors and incapables advertising themselves as saviors of the world of AI and offering to triple or quadruple your results, as has been the case with e-commerce up to now. Even in this short time, countless websites have appeared that promise to help you understand and use AI; my reaction to the ones I checked is: other than take my money, what can you do that I can’t? My question is, if so many people can come out of nowhere to build your e-commerce business or your SEO, shouldn’t it be easy enough for you to do it yourself? The same goes for generative AI. With some effort, we dummies can learn what needs to be learned to maximize our relationship with this technological tool.
So the answer is that we who question and adapt AI to our needs are NOT dummies; those who enslave themselves to it OR seek to control how we use it are the dummies, and the enemy.
i New Scientist, “Chicxulub: A Massive Asteroid that hit Earth 65 million years ago,” https://www.newscientist.com/definition/chicxulub/#:~:text=asteroid%20the%20size%20of%20a,The%20impact%20was%20devastating.
ii The Decision Lab, “System 1 and System 2 Thinking,” https://thedecisionlab.com/reference-guide/philosophy/system-1-and-system-2-thinking
iii Henry Kissinger, Eric Schmidt and Daniel Huttenlocher, WSJ Opinion 2/24/2023, “ChatGPT Heralds an Intellectual Revolution,” https://www.wsj.com/articles/chatgpt-heralds-an-intellectual-revolution-enlightenment-artificial-intelligence-homo-technicus-technology-cognition-morality-philosophy-774331c6
(All subsequent quotes are from this source.)