Artificial Intelligence: It’s Complicated And Unsettling, But Inevitable
If artificial intelligence came to life today and set up a Facebook profile, it would have a relationship with all of us, and our relationship status would be “It’s Complicated.” Although we don’t see the motivation or reasoning behind the algorithms we spend so much time with, they are doing great things for us, and so we love them. Unfortunately, we don’t necessarily understand them or how they will change us. By “we,” I don’t just mean the general public. I mean software developers, their managers, and even machine learning engineers. AI does some things well and some things poorly, but on balance, the benefits exceed the costs of letting an algorithm make decisions.
Just like any relationship, it’s about accepting your partner’s flaws as much as their charms, and understanding how both can shape your future together. Thought leaders are having a very public debate about where this technology is going, and where the relationship may sour. In one recent example, Jack Ma and Elon Musk discussed the future of humanity in a world run on artificial intelligence. Ma’s general viewpoint was upbeat, while Musk urged caution. Both points of view have merit. AI is delivering a lot of value, and it could also cause a lot of problems. On the positive side, automation with AI is already changing the way we work and play. On the cautionary side, imperfect AI systems will not only annoy us, they could bring us to question the technology itself. Simplifying artificial intelligence the way Musk and Ma have, casting “AI = good” against “AI = bad,” misses the point that “AI = now.” AI is already here, and it will keep getting better. To be clear on the distinction: regular non-AI software does what you tell it to do, whereas AI looks at a set of data and figures out what to do from that data.
AI algorithms have their flaws, but so do things we like and still accept, such as driving. As we embrace the bots (AI algorithms), we shouldn’t hide from the flaws in how things work today. Driving causes car crashes, but we love driving; it’s worth the risk. There are many problems in the AI field that need work, including the interpretability of AI models, better tools for dataset handling, more automated machine learning, and avoiding bias. And yet the benefits massively outweigh the risks. We use artificial intelligence in many aspects of our lives, from spam filters to movie recommendations.
What are the signs that the evolution of artificial intelligence will follow the positive path of a widely adopted technology, rather than concentrating within a few large companies? The artificial intelligence community is making a concerted effort to supply tools and education to regular software developers so that more and more business processes can integrate artificial intelligence. For example, you can find code on GitHub for most common artificial intelligence tasks, licensed for free commercial use. I’m not suggesting that these models are production ready, but it’s good to have this development out in the open. The major cloud providers such as Amazon, Google, Microsoft, and IBM have made machine learning resources easy to access, and they also support integration partner networks for custom AI development. This is a really good situation, but there is room for improvement.
AI is a complicated set of technologies, and the way these technologies are used to build products is still evolving. It is well known in the AI industry that there are flaws in the way machine learning models are shared between researchers and eventually used to deliver projects in industry. Specifically, AI models can be fooled with carefully designed adversarial examples (the model sees something when it should see nothing, or vice versa), backdoors (the model does the main task but also some other hidden task), and privacy leaks (the model can be reverse-engineered to reveal its private training data, or it simply bypasses encryption). These problems can apply to everything from self-driving cars to doorbells.
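To make the adversarial example idea concrete, here is a minimal sketch of the classic fast gradient sign method, assuming a PyTorch image classifier; the model, image, and label here are placeholders rather than any particular production system.

```python
# A minimal FGSM sketch, assuming a PyTorch image classifier.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Nudge each pixel in the direction that most increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The change is small enough that a human sees the same picture,
    # but the classifier's prediction can flip entirely.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The unsettling part is that the perturbation is not random noise; it is aimed precisely at the model’s blind spots.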
I feel that today the world is using many artificial intelligence capabilities by following a strategy known in the software security world as “security through obscurity.” It is a bad idea to assume that tinkerers won’t eventually figure out how to “hack” flawed machine learning models, especially ones they can buy and take apart. So it makes sense to assume that academics, hackers, and regular Joes will try breaking into places that developers did not want them to go, including analyzing AI models with randomly generated data just to see what happens. Trying random data is called fuzzing, and in computer hardware it has revealed CPU instructions that were never meant to be in the public domain. Similar hacks are going to spring up against AI systems, and just as with regular software, I predict that AI bugs and data breaches are going to be in the news.
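As a toy illustration of what fuzzing an AI model might look like, here is a sketch that throws random inputs at a black-box classifier and flags the inputs it is strangely confident about; the predict function is hypothetical, standing in for whatever query access a tinkerer happens to have.

```python
# A toy fuzzing loop for a black-box model; predict(x) is a hypothetical
# function returning a vector of class probabilities.
import numpy as np

def fuzz_model(predict, input_shape, trials=10_000, threshold=0.99):
    """Throw random noise at the model and flag inputs it is oddly sure about."""
    suspicious = []
    for _ in range(trials):
        x = np.random.rand(*input_shape)      # meaningless random input
        probs = predict(x)
        if probs.max() > threshold:           # high confidence on pure noise is a red flag
            suspicious.append((x, int(probs.argmax())))
    return suspicious
```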
Reverse engineering a doorbell does not seem problematic, but reverse engineering stock trading algorithms would be bad. Simply knowing that someone else uses a particular AI model gives you an advantage in how you operate. Even knowing that a certain signal triggers most AI models to make a mistake is an opening to do naughty things.
[Photo: Massimo Ottani, owner of The Landing – L’approdo Pizza Kitchen in Newton, MA, cuts a Margherita pizza for a customer, June 26, 2019. Nathan Klima for The Boston Globe via Getty Images]
For example, Alice and Bob are sitting in Eve’s pizza shop. Bob uses a free and widely available AI model called “wants pizza” to predict that Alice is hungry and will want to buy the last slice of pizza for $1 within 2 minutes. Bob also has a crush on Alice, so he doesn’t want to buy the last slice out from under her. Faced with a hunger for pizza and a fear of rejection, he pulls out some cash and is ready to pay $10 for a whole new pie from Eve. Eve knows all about the poor-quality “wants pizza” AI model that Bob is using, and she has studied it well. She has her own copy of the model, so she knows exactly what it predicted for Bob. Eve also has a better AI model, which predicts that Alice will instead get up to use the bathroom; once Alice leaves, Bob will simply buy the last $1 slice for himself, costing Eve $9 in lost revenue. Eve now has an excellent picture of what Alice and Bob are thinking, and she can take advantage of the pizza market by quickly locking in Bob’s $10 pizza order before Alice gets up to ask for the key to the restroom.
Obviously, this example of information asymmetry applies to more than the pizza market. Trading outfits can use dark pools to mask their intentions from an order book, and there are many other strategies to avoid telegraphing in advance what orders will be placed. But regardless of the cat-and-mouse games played to hide information from other parties, the main idea I’m trying to convey with Alice, Bob, and Eve is this: our software ecosystem today is good at detecting bugs in regular software, when we care to look, but not nearly as good at detecting machine learning bugs, or even knowing they exist. Our complicated relationship with models that make predictions will be an important issue even in the very short term. This is an area for improvement.
The general rule in artificial intelligence is that data is power, especially proprietary data. However, keep in mind that AI models are trained on data, and the common practice of using pre-trained models means that, even if the source data is never disclosed, the behavior of a published model can be extracted and turned inside out. The most vulnerable AI models are the pre-trained, off-the-shelf kind: widely used, but not necessarily widely audited.
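Turning a model “inside out” can be as simple as querying it and training a local copy. Here is a rough sketch of that model-extraction idea, assuming only black-box access through a hypothetical query_model function; a real attack would be far more careful about how it chooses its probe inputs.

```python
# A rough model-extraction sketch; query_model(x) -> label is hypothetical,
# standing in for any deployed model an attacker can query.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_surrogate(query_model, n_features, n_queries=5000):
    """Train a local copy of a remote model purely from its answers."""
    X = np.random.rand(n_queries, n_features)   # synthetic probe inputs
    y = np.array([query_model(x) for x in X])   # labels leak out one query at a time
    surrogate = DecisionTreeClassifier().fit(X, y)
    return surrogate                            # mimics the original on similar inputs
```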
Identifying unintentional flaws (bugs) is how Eve keeps one step ahead in the pizza game. There are many ways these bugs can find their way into AI models. For example, squishing a model down to a smaller size so that it fits on a phone can introduce errors (quantization error), but careful testing can catch this sort of problem. As with regular software, artificial intelligence can be designed to be really robust, but it takes extra effort that most developers and companies will not invest.
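Quantization error itself is easy to see in a few lines. This toy example rounds a layer’s float32 weights down to 8-bit integers, the kind of squishing that gets a model onto a phone, and measures how far the restored weights drift from the originals.

```python
# Toy illustration of quantization error on made-up weights.
import numpy as np

weights = np.random.randn(1000).astype(np.float32)  # stand-in for one layer's weights
scale = np.abs(weights).max() / 127                  # map the float range onto int8
quantized = np.round(weights / scale).astype(np.int8)
restored = quantized.astype(np.float32) * scale      # what the phone actually computes with
print("mean absolute quantization error:", np.abs(weights - restored).mean())
```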
When you tell an AI-based assistant to set a meeting and it uses the wrong timezone, the two common reactions are “the software is broken” or “I used the software wrong”. Interacting with a human personal assistant, you develop a relationship and develop expectations for how certain tasks will be carried out. As we increasingly rely on AI-based assistants, how do our expectations, and our sense of self, begin to change?
And so it seems that problems abound. Why don’t software developers at companies simply fix them? Well, many of these problems are not solved yet, and often the issues with machine learning models are introduced by a programmer who has never spoken with the person who wrote the code for the task they are running, who in turn never met the person who collected the raw data, and so on. As soon as you unwrap the gift basket of machine learning, you will find complicated math running under the hood. AI developers need to check that their assumptions are consistent with the mathematics their software libraries assume.
Artificial intelligence, as imperfect as it is, has huge potential to continue to improve our lives, and with any disruptive technology, there will be issues. I want to give you a realistic view of where artificial intelligence has been adding serious economic productivity gains, and where it’s just not ready for primetime. I don’t want to set off the alarm bells that everything is terrible. It isn’t. But in this same vein, we need to acknowledge that as a society we are always playing with stuff while it is still being invented, and eating raw cookie dough because we don’t want to wait for the cookies to bake.
Given that there are these open problems in artificial intelligence that people are working on right now, I foresee artificial intelligence adoption following a similar path to WiFi technology. We have upgraded WiFi several times since the 1990s (remember WEP, WPA, and WPA2), and even though it was annoying, we all just rolled with the punches because WiFi did (and continues to do) great things for us.
Moving forward, let’s assume these problems get ironed out over time. As artificial intelligence grows up and integrates ever more deeply into our everyday lives, how will AI affect the modern workforce? Let’s take a look in the rearview mirror to think about what the past can tell us about where artificial intelligence may take us, and where it really will take a wrecking ball to jobs held by humans today.
Anxiety is a strong motivator, and the office workers I talk with feel that automation is coming for their jobs. They don’t realize that all the problems mentioned in the first half of this article are limiting the broad adoption of AI. Until that last traffic light comes down in America, we can’t have autonomous cars whizzing through intersections without stopping, unlocking the kind of efficiency you see when a crowd of humans walks past each other in a courtyard. It’s the lock-in of legacy systems and processes that makes radical change slow and costly. And so I’m not worried about, for example, all programmers losing their jobs to AI; we have been trying to put ourselves out of a job for years with automated tools, and the sector just keeps growing. But what about office workers, accountants, marketers, and all the rest of the citizens of the cubicle maze?
The talent economy, in my view, is still strong on both the supply and demand sides of the global high-tech economy. There are reasons to be optimistic about the near future. There is a talent shortage among artificial intelligence developers, and even in the broader economy, employment is historically quite good, even with underemployment factored in.
So, if the optimistic Jack Ma picture is right in the short term, when will the “artificial intelligence is good” party be over? Should humans be looking down in fear at their computerized replacements? I don’t think so. The warnings from Musk should encourage us as a society to have a lively debate, not to hide under our desks in the fetal position.
[Photo: Alibaba Group Chairman Jack Ma and Tesla co-founder/CEO Elon Musk attend the opening ceremony of the 2019 World Artificial Intelligence Conference (WAIC) at Shanghai Expo Center, August 29, 2019. VCG via Getty Images]
There is a secular trend toward the adoption of AI. It is in a growth mode, driven in part by technology and in part by the general economy. The growth rate of adoption may be fast or slow, but it won’t be negative. Although AI, a term coined in 1956, was built on research into neural networks from the 1930s and 1940s, the majority of techniques in practical applications of AI are still quite new. The end of the last artificial intelligence winter coincided with the beginning of the longest period of stable economic growth in US history, so the resurgent field of AI has not been battle-tested by a recession the way older fields were by the dotcom bust and the real-estate crisis. Furthermore, the artificial intelligence field, like many emerging technologies in their time, has been plagued by hype, overpromise, and misinformation. Problems with AI are still easy to point out: interpretability, dataset handling tools, automated machine learning, bias, adversarial examples, backdoors, privacy leaks, legacy systems, and many other factors are holding back AI adoption like a wall of inertia. And so I don’t see AI racing to empty office buildings of their human workers.
Does that mean the AI field is cooling off? In the short term, I don’t see a new AI winter as likely. Recession is a scary word, and the average person still has a visceral reaction to the pain of the 2008 recession and the economic reset that followed. But no two recessions are alike. A recession in the overall economy does not necessarily mean a recession in the artificial intelligence field, or even in the high-tech field.
Even if several signs point to an impending recession (e.g., yield curve inversion), it is important to view corrections in the economy as a controlled burn that keeps the core economy healthy, rather than an uncontrolled burn that levels the economic neighborhood. In a recession, which hopefully doesn’t materialize, people and corporations look for efficiency, and artificial intelligence is all about efficiency. Have a look at a long-term chart for a broad index like the Russell 3000 (which represents about 98% of all public US equity): it moved from around 200 in the early 1990s to the 1700 level in 2019. Even adjusted for inflation, that’s really broad-based growth. The story behind this rise in the US economy has a lot more to do with automation in general than with something specific like artificial intelligence. Spreadsheets changed the way business is conducted by humans in offices, and that story started well before artificial intelligence shot to prominence over the past decade. I do feel that artificial intelligence is moving into the driver’s seat of the automation school bus. Literally. If you drive a truck, automation is coming for your job, plain and simple. It’s not paranoia if they really are out to get you.
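As a back-of-envelope check on that chart, assuming the index went from roughly 200 in 1992 to roughly 1700 in 2019, and assuming an average inflation rate of about 2.3% per year (an assumption, not a measured figure), the arithmetic still works out to healthy real growth:

```python
# Back-of-envelope growth math; the 1992 start year and 2.3% average
# inflation are illustrative assumptions.
years = 2019 - 1992
nominal_cagr = (1700 / 200) ** (1 / years) - 1    # roughly 8% per year, nominal
real_cagr = (1 + nominal_cagr) / 1.023 - 1        # roughly 6% per year after inflation
print(f"nominal: {nominal_cagr:.1%}, inflation-adjusted: {real_cagr:.1%}")
```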
Other jobs in the economy will not be automated in my lifetime, simply because we don’t want robots watching our babies or cutting our hair. And AI, in my view, is more likely to improve human productivity like the spreadsheet did for office workers, rather than replace the humans with complete automation.
And so I take a middle-of-the-road view on how seriously to take the problems with AI and how the technology is being adopted. There are plenty of technical challenges to address right here and now. We have not cured cancer yet. We don’t have fusion reactors yet. There is a lot of work to do as we humans finish inventing artificial intelligence as a field, and the hope, my hope, is that progress in artificial intelligence will propel us forward in many key areas where humanity could really use a helping hand.
- Is AI good or bad? On the whole, artificial intelligence can be a force for good, although it still needs work, not least because we already have such a complicated relationship with our tech
- Is the AI field cooling off? Progress probably won’t slow down
- Is AI a huge job killer? No more than the humble spreadsheet was. In the same way that spreadsheets provided a jolt of productivity to a wide range of jobs, today’s artificial intelligence advances will have a broad impact on many job categories, but will only do deep damage in a few select areas where humans can be fully replaced, like driving
- Should you be integrating AI into your plans and processes? Yes