"the potential benefits in healthcare, design, entertainment, transport, education, and almosteveryareaofcommerceareenormous. however,scientistsandengineersareoften unrealistically optimistic about the outcomes of their work, and the potential for harm is just as great. the following paragraphs highlight five concerns. bias and fairness: if we train a system to predict salary levels for individuals based on historical data, then this system will reproduce historical biases; for example, it will probably predict that women should be paid less than men. several such cases have already become international news stories: an ai system for super-resolving face images made non-white people look more white; a system for generating images produced only pictures of men when asked to synthesize pictures of lawyers. careless application of algorithmicdecision-makingusingaihasthepotentialtoentrenchoraggravateexisting biases. see binns (2018) for further discussion. explainability: deep learning systems make decisions, but we do not usually know exactly how or based on what information. they may contain billions of parameters, and there is no way we can understand how they work based on examination. this has led to the sub-field of explainable ai. one moderately successful area is producing local explanations; we cannot explain the entire system, but we can produce an interpretable descriptionofwhyaparticulardecisionwasmade. however,itremainsunknownwhether itispossibletobuildcomplexdecision-makingsystemsthatarefullytransparenttotheir users or even their creators. see grennan et al. (2022) for further information. weaponizing ai: all significant technologies have been applied directly or indirectly toward war. sadly, violent conflict seems to be an inevitable feature of human behavior. ai is arguably the most powerful technology ever built and will doubtless be deployed extensively in a military context. indeed, this is already happening (heikkilä, 2022). draft: please send errata to [email protected] 1 introduction concentrating power: it is not from a benevolent interest in improving the lot of the human race that the world’s most powerful companies are investing heavily in artifi- cial intelligence. they know that these technologies will allow them to reap enormous profits. like any advanced technology, deep learning is likely to concentrate power in the hands of the few organizations that control it. automating jobs that are currently donebyhumanswillchangetheeconomicenvironmentanddisproportionatelyaffectthe livelihoods of lower-paid workers with fewer skills. optimists argue similar disruptions happened during the industrial revolution and resulted in shorter working hours. the truthisthatwesimplydonotknowwhateffectsthelarge-scaleadoptionofaiwillhave on society (see david, 2015). existential risk: the major existential risks to the human race all result from tech- nology. climate change has been driven by industrialization. nuclear weapons derive from the study of physics. pandemics are more probable and spread faster because in- novations in transport, agriculture, and construction have allowed a larger, denser, and more interconnected population. artificial intelligence brings new existential risks. we should be very cautious about building systems that are more capable and extensible than human beings. in the most optimistic case, it will put vast power in the hands of the owners. in the most pessimistic case, we will be unable to control it or even understand its motives (see tegmark, 2018). 
This list is far from exhaustive. AI could also enable surveillance, disinformation, violations of privacy, fraud, and manipulation of financial markets, and the energy required to train AI systems contributes to climate change. Moreover, these concerns are not speculative; there are already many examples of ethically dubious applications of AI (consult Dao, 2021, for a partial list). In addition, the recent history of the internet has shown how new technology can cause harm in unexpected ways. The online community of the eighties and early nineties could hardly have predicted the proliferation of fake news, spam, online harassment, fraud, cyberbullying, incel culture, political manipulation, doxxing, online radicalization, and revenge porn.

Everyone studying or researching (or writing books about) AI should contemplate to what degree scientists are accountable for the uses of their technology. We should consider that capitalism primarily drives the development of AI and that legal advances and deployment for social good are likely to lag significantly behind. We should reflect on whether it's possible, as scientists and engineers, to control progress in this field.
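To make the idea of a local explanation concrete, here is a minimal sketch. It is our own toy illustration, not a method from this book or any particular paper: it attributes a black-box model's prediction for a single input to individual features by perturbing each feature in turn and measuring the change in output. The model, feature values, and baseline are all hypothetical.

import numpy as np

def local_explanation(predict, x, baseline=0.0):
    # Attribute the prediction for one input x to its features by
    # replacing each feature with a baseline value and recording how
    # much the model's output changes. A larger score means the feature
    # mattered more for this particular decision.
    reference = predict(x)
    scores = np.zeros(len(x))
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = baseline          # occlude one feature
        scores[i] = reference - predict(perturbed)
    return scores

# Hypothetical black-box scorer standing in for a trained network.
weights = np.array([2.0, -1.0, 0.5])
predict = lambda x: float(weights @ x)

x = np.array([1.0, 3.0, 2.0])
print(local_explanation(predict, x))     # per-feature attribution for x

For this toy linear scorer, the attributions simply recover each weight multiplied by its input, but the same perturbation loop applies unchanged to any black-box predictor. Practical local-explanation methods such as LIME and SHAP refine this basic idea with sampling and weighting schemes; the point here is only that explaining one decision is far more tractable than explaining the whole system.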