Where will artificial intelligence lead us? Now is the time to ask (Commentary)


Patricia Hirl Longstaff is Senior Research Fellow at the Moynihan Institute of Global Affairs, in the Maxwell School of Citizenship and Public Affairs at Syracuse University.

By Patricia Hirl Longstaff | Special to Syracuse.com

Artificial intelligence is here. This is not science fiction. It is coming to our jobs and to the way we deal with machines. AI is a major research effort at Google, Amazon, the U.S. government and universities. Microsoft just invested $1 billion in a company, OpenAI, that is working on artificial “general” intelligence that would enable a machine to learn and operate just like a human. Maybe better.

The robots that clean your floors are only the beginning. AI will educate children, help senior citizens, drive our cars and trucks, and maybe replace soldiers in war. And no, the AIs are not going to take over the world any time soon. The AI future will bring some good things and some bad things. Maybe it is time we all thought about where this is leading us.

Or we can just see what happens. We let the internet and social media play out, thinking we could fix later whatever needed to be fixed – except that they got too big to fix. For the foreseeable future, we will just have to live with fraud and cyber attacks that endanger the very basis of our democracy. The systems are now so enormous that the international collectors and disseminators of data have no way to police them. Now even Facebook founder Mark Zuckerberg is asking for government regulation. Really. And Silicon Valley is interested in other AI questions as well.

For example, AI will replace jobs. There is no debate about this. And not just those of educators, bricklayers, truck drivers and (yes) lawyers. Self-driving trucks alone will put millions of people out of work. It has already started, and most of it will happen by the mid-2040s. The data we are feeding into computers will let an AI “learn” what these jobs involve and how to do them better – without all the messiness of people on the payroll. What will happen to all those people?

Will they all be retrained to code for computers? Some will focus on the non-technical parts of their jobs, but what happens to the people who did the technical work? Will they just stay home and collect a minimum income from the government? AI could create an “abundance economy” – a utopia where our basic needs are essentially free. What effect would that have on our culture, politics and markets? The good news is that jobs are actually being discussed, both in the U.S. and around the world. How would you like this to turn out? Once again, this is not fiction. The impact on you, your children and your grandchildren is up for debate.

Another area that has people all over the world talking is “autonomous weapons.” People are trying to understand how a robot armed with lethal weapons could make a “kill” decision without any human input. And how can a weapon make that decision when an enemy is “spoofing” it – sending code that makes a private car look like a tank? In the U.S., the Department of Defense cannot build such a weapon because of our law. But potential enemies have no such law.

The companies that are developing AI systems are some of the biggest in the world, with tentacles in virtually every nation. Who will regulate them? Who will understand the technology well enough to propose workable regulations? No, this is not going to be easy, but it is being discussed.

If each country regulates AI on its own, how does that affect the instantaneous transmission of data around the world via satellites? If one country tells its entrepreneurs to “slow down” until some things get sorted out, that country will be left behind. The two most powerful countries in AI technology are the U.S. and China, and neither wants to be left behind.

One important part of the AI debate is privacy. Even Google has advocated some sort of worldwide privacy regulation. Some sort of planet-wide regulatory “convergence” would allow companies to stay on the legal side of privacy. The European Union and (sort of) the U.S. have more privacy regulation, but countries like China and many developing countries have very little. For example, can an AI use facial recognition to see if you were at a political demonstration? Facial recognition systems are also under debate because current AI cannot reliably identify non-white faces.

Nobody should expect these debates to be easy. Google’s experiment with an ethics panel for AI was canceled after there was no agreement on who should sit on it or what it should do. The alternative is letting AI hit us all before there is anything we can do about it – just as Web 2.0 did.

So where do you fit into the discussions on AI? You can make sure that your elected officials know about these controversies. Try writing to them (all of them) about where you come down on the loss of jobs and what we should do about it. See if they know what these debates are about. You can help make the 2020 election about how people respond to technology.
