As usual, the Wired article is surprisingly low on ethics and specifics. Publications like Wired (and much of the coverage churned out about Silicon Valley) are usually geared to mould public opinion.
Instead of blindly adopting Western notions of privacy, and hence of AI, we need robust discussions on how AI is going to affect the socio-cultural mores of local populations. Sadly, the “charitable” organisations and non-profit systems cultured in the West are non-existent in the “rest of the world”, so the specific cheerleaders needed to push this agenda are missing.
I can’t foresee local organisations willing to invest resources to investigate these profound effects, or enough local expertise to understand the ramifications. The effects on the ground are far removed from the way the algorithms are designed.
As authoritarian governments try to compete against pluralistic technologies in the 21st century, they will inevitably face pressures to empower their own citizens to participate in creating technical systems, eroding the grip on power. On the other hand, an AI-driven cold war can only push both sides toward increasing centralization of power in a dysfunctional techno-authoritarian elite that stealthily stifles innovation. To paraphrase Edmund Burke, all that is necessary for the triumph of an AI-driven, automation-based dystopia is that liberal democracy accept it as inevitable.
Dystopia, authoritarian, pluralistic, empower, cold war: these are fanciful terms.
I don’t think it is worth your time to read. As always, I am posting a quick summary here:
- A leading anxiety in both the technology and foreign policy worlds today is China’s purported edge in the artificial intelligence race.
- AI is hungry for more and more data, but the West insists on privacy.
- This is a luxury we cannot afford, it is said, as whichever world power achieves superhuman intelligence via AI first is likely to become dominant.
- A term like “nanotechnology” classifies technologies by referencing an objective measure of scale, while AI only references a subjective measure of tasks that we classify as intelligent.
- For instance, the adornment and “deepfake” transformation of the human face, now common on social media platforms like Snapchat and Instagram, was introduced in a startup sold to Google by one of the authors; such capabilities were called image processing 15 years ago, but are routinely termed AI today.
- There’s always a second way to conceive of any situation in which AI is purported. This matters, because the AI way of thinking can distract from the responsibility of humans.
- AI might be achieving unprecedented results in diverse fields, including medicine, robotic control, and language/image processing; or, a certain way of talking about software might be in play, one that avoids fully celebrating the people who, working together through improved information systems, are actually achieving those results.
- “AI” might be a threat to the human future, as is often imagined in science fiction, or it might be a way of thinking about technology that makes it harder to design technology so it can be used effectively and responsibly.
- Computation is an essential technology, but the AI way of thinking about it can be murky and dysfunctional.
- You can reject the AI way of thinking for a variety of reasons.
- One is that you view people as having a special place in the world and being the ultimate source of value on which AIs ultimately depend (the pluralist objection).
- Regardless of how one sees it, an understanding of AI focused on independence from—rather than interdependence with—humans misses most of the potential for software technology.
- Supporting the philosophy of AI has burdened our economy.
- Conversely, when companies find creative new ways to use networking technologies to enable people to provide services previously done poorly by machines, this gets little attention from investors who believe “AI is the future,” encouraging further automation.
- In fact, as recent reporting has shown, China’s greatest advantage in AI is less surveillance than a vast shadow workforce actively labeling data fed into algorithms.
- Just as was the case with past hidden labor forces and their relative failures, these workers would become more productive if they could learn to understand and improve the information systems they feed, and were recognized for this work, rather than being erased to maintain the “ignore the man behind the curtain” mirage on which AI rests.