With the release of generative AI tools, the hype has approached a fever pitch, and reactions span the spectrum. Calls for AI regulation, investments in AI startups, and marketing that has to mention AI all come from those who think AI upends everything. At the other end of the spectrum, a few people (very few) look at AI as nothing special. My perspective is that AI is part of the evolution of IT, not a revolution. I’m happy to be proved wrong, but for now I’ll gladly wear the AI curmudgeon label. Complicating matters, AI is an umbrella term for dozens of programming techniques: expert systems, computer vision, machine learning, and many others are currently considered AI. So which AI is going to flip the world on its head?
A Little History
The term artificial intelligence was coined in the 1950s, the same decade Alan Turing proposed his famous “Turing Test”: to pass, a computer’s responses had to be indistinguishable from a person’s. Researchers were optimistic that computers would pass within a decade or so. I think we are finally there. An overnight success – seventy years in the making.
A lot of early AI work was based on encoding human knowledge into machines. Anyone remember when being a “knowledge engineer” was the coolest new job? There were some successes in what we would now call narrow AI: machines that performed well in very specific domains. MYCIN, for example, was an expert system that successfully diagnosed bacterial infections in the 1970s. However, the difficulty of encoding knowledge hampered progress, and funding dried up.
The idea of learning machines first emerged in the 1950s, but it wasn’t until backpropagation was popularized in the 1980s that neural networks caught on. Neural networks require a lot of computing power, which wasn’t always available, but we could see the potential. I wrote my first machine-learning program on a spreadsheet!
The 90s brought more computing power for less money; throw in some key techniques like genetic algorithms, and we could see the future. IBM’s Deep Blue beat the world chess champion in 1997. However, translating that success in a narrow application with well-defined parameters to more general problems proved frustratingly elusive.
With the new century, AI entered everyday life. Once again, sheer computing power, large data sets, and refinements in machine-learning techniques brought us Siri, Alexa, and chatbots, along with fraud detection, medical diagnosis, and robotics. Deep learning catapulted computer vision, language processing, and autonomous vehicles forward. Speaking of autonomous vehicles, I remember executives from a major US auto company saying we were five years away from commercial self-driving cars – in 2015. The point is that the last 10% is much harder than we think.
A lot of people are afraid of AI. Since I see AI as an evolutionary step in the progression of information technology, I don’t consider it any more of a problem than what we already do, and have done, to ourselves with technology. There are plenty of concerns, but the approaches we’ve already developed can help us deal with the challenges AI presents.
A Sampling of Concerns
Loss of human control. There is a concern that we don’t know why an AI system makes the decisions it does. As a programmer, I have spent far too much time trying to figure out why a conventional program was doing what it did. I don’t know why much of the software I use does what it does, but as I use it I become more familiar with what it does with the inputs I give it. Our students here at NYU are very good, and getting better, at understanding how to get the outputs they want from generative AI systems. This is a skill we need and will continue to develop.
Bias and discrimination. AI systems are trained on data created by humans, which means they can inherit the biases in that data and make decisions that discriminate against certain groups of people. We get into trouble when we don’t thoroughly test any computer system. The quality of the output is something we have to understand from both the creator’s perspective and the user’s perspective. When we implement enterprise software, we test it in our environment. Does it fit our processes? If not, we customize it. The skills and techniques are different, but the concept is the same: procure the system that best meets requirements, test it, modify it based on those tests, and train the users. The only thing that changes is the skills we need, and we have been changing the skills we need for both IT professionals and users since computers were introduced. We have to learn how to use these systems and know what to expect. The more things change, the more they remain the same.
Security risks. When I first started diving into cybersecurity in the late 90s, I wondered whether this was a smart career move. Surely, we would soon eliminate all these vulnerabilities, reducing the importance of the field. Remembering human nature, I quickly banished the thought. Yes, the techniques will be different, but what is cybersecurity if not ever-changing? AI introduces risks, but what technology doesn’t?
Job displacement. Not unique to AI. Technology has always displaced people. The pace of technological change is accelerating; the question is how fast people and society can adapt.
Lack of empathy and creativity. Some fear that AI, while capable of performing tasks, lacks true emotional understanding and creativity, which are distinctively human traits. I see this as a feature and not a bug.
Existential threat. Some people believe that AI could pose an existential threat to humanity: the fear that AI could surpass human intelligence and decide to eliminate us. AI is a machine that we build, and we don’t understand human consciousness right now. Look at how long it took us to develop a program that mimics human communication, and we understand language really well. Sentient machines are fun sci-fi. Programs that do a good job of writing sentences are a long way from consciousness.
AI and generative AI certainly have a lot of potential, and we already have many useful tools built on various forms of AI. Google’s autocorrect has helped me write this faster and more clearly. We will continue to find new applications for existing techniques, improve them, and create new ones.
How to Think About AI
My career has been in enterprise IT, where the focus is supporting the larger mission, not developing new technology. Here is how I think about AI, and how I’m encouraging others to think about it:
- Work with users to see how to take advantage of these new capabilities. Non-technical people bring a different perspective and a deep understanding of what they do. IT folks bring their own perspective on the opportunities, and especially the challenges, of operating at scale.
- Don’t just say no. New technologies are inherently risky. We have to be good at living with risk in the right areas and at mitigating those risks appropriately. People will bring innovations and new technologies into the organization whether we like it or not; better that we help than let things get completely out of control.
- Trust the controls that have served us well. Most of what we already do to manage technology will work with AI. We will have to look at things a little differently and be flexible, but the principles are the same. We’ve always had prohibitions against releasing proprietary information into public forums, and people know not to put it into public blogs or social media. Public LLMs can be thought of as blogs, except that instead of a person reading the blog to formulate an answer, the LLM does it.
- Anticipate what needs to change. Technology, training, and processes may need to be a little different. Don’t be surprised by failures that come from trying to do things the old way.
- Keep investing in your team. Technology changes quickly, and that is nothing new. We have always needed to make sure our people get the training they need. Ensuring your team can meet your organization’s needs has always been a key to success.
AI brings great potential to our enterprises, but moving from great potential to real productivity is a longer journey than we would like. Remember the “IT Productivity Paradox”: it took decades for IT investments to pay off in real productivity gains. AI has been with us for almost as long as IT has. We don’t want to get left behind, but we also don’t want to waste valuable resources chasing hype. The principles that have made us successful with each new technology will serve us well with AI.