The topic of “artificial intelligence” has recently seen a confluence of nationally significant announcements. In September, Stanford University released its One Hundred Year Study on Artificial Intelligence, which was quickly followed by the announcement in early October that five firms — Amazon, DeepMind of Google, Facebook, IBM, and Microsoft — have formed a nonprofit named the Partnership on Artificial Intelligence to Benefit People and Society (Partnership on AI). A week after the Partnership on AI announced its formation, the National Science and Technology Council (NSTC), which is overseen by the Executive Office of the President, released Preparing for the Future of Artificial Intelligence.
Is this a coincidence? For the release of the Stanford and the NSTC reports, perhaps, but the formation of the Partnership on AI is no coincidence.
The members of the Partnership on AI realize the marketplace is at an important “tipping point” in the increasing utilization of AI in the U.S. AI is already used in automobiles to enable enhanced driving safety features and GPS services, in smartphone apps, and in wearable medical devices, to name just a few examples. In December 2013, Gartner, Inc., a leading information technology research and advisory company, released a study forecasting that the “Internet of Things” (IoT), which includes a wide range of wirelessly connected devices, will reach 26 billion units in the marketplace by 2020, a 30-fold increase from the 0.9 billion in circulation in 2009. Moreover, Gartner estimated IoT product and service suppliers will generate incremental revenue of more than $300 billion by 2020, with most of this revenue coming from services.
Given this spectacular revenue forecast, the Partnership on AI, consisting of major information technology (IT) service providers, is interested in establishing a social and public policy environment supportive of a lucrative future of ever-expanding AI-enabled consumer devices. To that end, the industry nonprofit announced it intends to conduct research, recommend best industry practices, and publish research under an open license in areas such as ethics, fairness, and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of AI technology. Interestingly, the Partnership on AI has no intention of lobbying government or other policymaking bodies, leaving this important political activity to individual member firms and to other industry coalitions.
In an interview with the Hill earlier this month, Eric Horvitz, managing director at Microsoft Research and interim co-chair of the Partnership, noted, “There’s been a sense that there’s anxiety about AI that is misplaced and that such poor understandings can lead to poor decisions when it comes to thinking through the possibilities, and in fact some of the great possibilities, for AI ahead.” Some of that anxiety about AI is justified. The Stanford University study predicts AI may negatively impact employment in specific industries, such as transportation, where it could reduce the demand for truck drivers.
However, AI is also likely to create new job prospects in other industries. The NSTC report recommends the White House convene a study on automation and the economy to get a better sense of AI’s potential future impact on U.S. employment. If this recommendation is accepted, the follow-up report would be released to the public by the end of 2016.
In their report, the Stanford researchers cautioned, “As a society, we are now at a crucial juncture in determining how to deploy AI-based technologies in ways that promote, not hinder, democratic values such as freedom, equality, and transparency.” The Obama administration argues it is imperative for industry, civil society, and government to work together to develop the positive aspects of AI technology, manage its risks and challenges, and open opportunities up for all Americans to build an AI-enhanced society and participate in its benefits.
Security questions related to AI have recently been elevated to a new level. In early October, Johnson & Johnson warned patients it had learned of a security vulnerability in one of its insulin pumps that a computer hacker could potentially exploit to overdose diabetic patients with insulin, though it described the risk as low because a hacker would need to be within 25 feet of the patient. Medical device experts believed it was the first time a manufacturer had issued such a warning to patients about a cyber vulnerability, a hot topic in the medical equipment industry since revelations emerged in August concerning possible cyber vulnerabilities in pacemakers and defibrillators manufactured by St. Jude Medical.
While public anxieties brought about by fears of machines taking over society or wreaking widespread havoc may be considered science fiction, the ability to hack into a person’s wearable insulin pump and adjust the level of insulin, or to make unauthorized adjustments to a pacemaker, is technologically feasible and could lead to a person’s death.
Proponents of AI, as with any emerging technology, need to acknowledge and address AI’s identifiable weaknesses early on. Finding workable solutions to these challenges will improve the likelihood of consumer acceptance. The long-term societal benefits of commercializing AI in education, energy conservation, environmental protection, and health care are indeed worth the commitment to fairness, inclusiveness, and transparency the Partnership on AI publicly supports as part of its effort to facilitate AI’s acceptance in the United States.
[Originally Published at American Spectator]