This is the second post in the series (first part here). Mozilla has jumped into the fray with an interesting document on how AI could be made more trustworthy. I have long used Firefox, Mozilla’s open source browser, and while I disagree with their business direction, I still feel there is an important place for open dialogue from organisations like Mozilla, who provide a counterweight to big tech.
Here’s a quick synopsis:
At the heart of the paper are eight big challenges the world is facing when it comes to the use of AI in the consumer internet technologies we all use every day: bias, privacy, transparency, security, and the centralization of AI power in the hands of a few big tech companies, among others. The paper also outlines four opportunities to meet these challenges. These opportunities centre around the idea that there are developers, investors, policy makers and a broad public that want to make sure AI works differently, and to our benefit. Together, we have a chance to write code, process data, create laws and choose technologies that send us in a good direction.
I believe this is a more inclusive approach than the “governance issues” outlined in the previous post.
[embeddoc url="https://assets.mofoprod.net/network/documents/Mozilla-Trustworthy_AI.pdf" viewer="google"]

The quick summary:
- If we want a healthy internet and a healthy digital society, we need to ensure that our technologies are trustworthy. Since 2019, Mozilla Foundation has focused a significant portion of its internet health movement-building programs on AI. Building on our existing work, this white paper provides an analysis of the current AI landscape and offers up potential solutions for exploration and collaboration.
- AI has immense potential to improve our quality of life. But integrating AI into the platforms and products we use every day can equally compromise our security, safety, and privacy. Our research finds that the way AI is currently developed poses distinct challenges to human well-being. Unless critical steps are taken to make these systems more trustworthy, AI runs the risk of deepening existing inequalities. Key challenges include:
- Monopoly and centralization: Only a handful of tech giants have the resources to build AI, stifling innovation and competition.
- Data privacy and governance: AI is often developed through the invasive collecting, storing, and sharing of people’s data.
- Bias and discrimination: AI relies on computational models, data, and frameworks that reflect existing bias, often resulting in biased or discriminatory outcomes, with outsized impact on marginalized communities.
- Accountability and transparency: Many companies don’t provide transparency into how their AI systems work, impairing mechanisms for accountability.
- Industry norms: Because companies build and deploy rapidly, AI systems are embedded with values and assumptions that are not questioned in the product development lifecycle.
- Exploitation of workers and the environment: Vast amounts of computing power and human labor are used to build AI, yet this labor remains largely invisible and is regularly exploited. The tech workers who perform the invisible maintenance of AI are particularly vulnerable to exploitation and overwork. The climate crisis is being accelerated by AI, which intensifies energy consumption and speeds up the extraction of natural resources.
- Safety and security: Bad actors may be able to carry out increasingly sophisticated attacks by exploiting AI systems.
- Several guiding principles for AI emerged in this research, including agency, accountability, privacy, fairness, and safety. Based on this analysis, Mozilla developed a theory of change for supporting more trustworthy AI. This theory describes the solutions and changes we believe should be explored.
- While these challenges are daunting, we can imagine a world where AI is more trustworthy: AI-driven products and services are designed with human agency and accountability from the beginning. In order to make this shift, we believe industry, civil society, and governments need to work together to make four things happen:

1. A shift in industry norms
- Many of the people building AI are seeking new ways to be responsible and accountable when developing the products and services we use every day. We need to encourage more builders to take this approach, and ensure they have the resources and support they need at every stage in the product research, development, and deployment pipeline. We’ll know we are making progress when:
- 1.1. Best practices emerge in key areas of trustworthy AI, driving changes to industry norms.
- 1.2. The people building AI are trained to think more critically about their work and are in high demand in the industry.
- 1.3. Diverse stakeholders are meaningfully involved in designing and building AI.
- 1.4. There is increased investment in trustworthy AI products and services.
- There are a number of ways that Mozilla is already working on these issues. We’re supporting the development of undergraduate curricula on ethics in tech with computer science professors at 17 universities across the US. We’re actively looking for partners to scale this work in Europe and Africa, and seeking ways to work with a broader set of AI practitioners in the industry.
2. New tech and products are built
- To move toward trustworthy AI, we will need to see everyday internet products and services come to market that have features like stronger privacy, meaningful transparency, and better user controls. In order to get there, we need to build new trustworthy AI tools and technologies and create new business models and incentives. We’ll know we are making progress when:
- 2.1. New technologies and data governance models are developed to serve as building blocks for more trustworthy AI.
- 2.2. Transparency is a feature of many AI-powered products and services.
- 2.3. Entrepreneurs and investors support alternative business models.
- 2.4. Artists and journalists help people critique and imagine trustworthy AI.
- As a first step towards action in this area, Mozilla will invest significantly in the development of new approaches to data governance. This includes an initiative to network and fund people around the world who are building working product and service prototypes using collective data governance models like data trusts and data co-ops. It also includes our own efforts to create useful AI building blocks that can be used and improved by anyone, starting with our own open source speech-to-text efforts such as DeepSpeech and the Common Voice data commons.
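To make the “AI building blocks” point concrete, here is a minimal sketch of offline transcription with DeepSpeech’s Python bindings. The model and scorer file names are assumptions (they correspond to version-specific downloads from the DeepSpeech releases page), and audio.wav stands in for any recording in the 16 kHz, mono, 16-bit PCM format the model expects.

```python
# Minimal sketch: offline speech-to-text with Mozilla's DeepSpeech.
# Requires: pip install deepspeech numpy
# File names below are assumptions; substitute your own downloads.
import wave

import numpy as np
import deepspeech

# Pre-trained acoustic model plus an optional language-model scorer.
model = deepspeech.Model("deepspeech-0.9.3-models.pbmm")
model.enableExternalScorer("deepspeech-0.9.3-models.scorer")

# DeepSpeech expects 16 kHz, mono, 16-bit PCM audio.
with wave.open("audio.wav", "rb") as wav:
    pcm = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(model.stt(pcm))  # the transcript, as plain text
```

Because both the model and the inference live on the user’s machine, no audio ever leaves the device, which is one concrete way an open building block can serve the privacy and agency goals described above.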
3. Consumer demand rises
- People can play a critical role in pressuring companies that make everyday products like search engines, banking algorithms, social networks, and e-commerce sites to develop their AI differently. We’ll know we are making progress when:
- 3.1. Trustworthy AI products emerge to serve new markets and demographics.
- 3.2. Consumers are empowered to think more critically about which products and services they use.
- 3.3. Citizens pressure and hold companies accountable for their AI.
- 3.4. Civil society groups are addressing AI in their work.
- Mobilizing consumers is an area where Mozilla believes that it can make a significant difference. This includes providing people with information they can use every day to question and assess tech products, as we have done with our annual *Privacy Not Included guide. It also includes organizing people who want to push companies to change their products and services, building on campaigns we’ve run around Facebook, YouTube, Amazon, Venmo, Zoom, and others over recent years. These awareness and pressure campaigns aim to meet people where they are as internet users and citizens, giving them even-handed, technically accurate advice. Our hope is that this kind of input will encourage tech companies to develop products that empower and respect people, building new levels of trust.
4. Effective regulations and incentives are created
- Consumer demand alone will not shift market incentives significantly enough to produce tech that fully respects the needs of individuals and society. New laws may need to be created and existing laws enforced to make the AI ecosystem more trustworthy. To improve the trustworthy AI landscape, we will need policymakers to adopt a clear, socially and technically grounded vision for regulating and governing AI. We’ll know we are making progress when:
- 4.1. Governments develop the vision and capacity to effectively regulate AI.
- 4.2. There is wider enforcement of existing laws like the GDPR.
- 4.3. Regulators have access to the data they need to scrutinize AI.
- 4.4. Governments develop programs to invest in and procure trustworthy AI.
- Mozilla has a long history of working with governments to come up with pragmatic, technically informed policy approaches on issues ranging from net neutrality to data protection. We also work with organizations interested in advancing healthy internet policy through fellowships and collaborative campaigns. We will continue to develop this approach around the issues described in this paper, such as encouraging major platforms to open up their data and documentation to researchers and governments studying how large-scale AI is impacting society. Europe and Africa will be our priority regions for this work.
- Developing a trustworthy AI ecosystem will require a major shift in the norms that underpin our current computing environment and society. The changes we want to see are ambitious, but they are possible. We saw it happen 15 years ago as the world shifted from a single desktop computing platform to the open platform that is the web. There are signs that it is already starting to happen again. Online privacy has evolved from a niche issue to one routinely in the news. Landmark data protection legislation has passed in Europe, California, and elsewhere around the world, and people are increasingly demanding that companies treat them — and their data — with more care and respect. All of these trends bode well for the kind of shift that we believe needs to happen. The best way to make this happen is to work like a movement: collaborating with citizens, companies, technologists, governments, and organizations around the world. With a focused, movement-based approach, we can make trustworthy AI a reality.
Conclusion:
- The work required to shift from centralized, privacy-invading AI to an era of trustworthy AI that respects people can seem daunting, but it is essential. Fortunately, we know that this kind of shift is feasible. Two decades ago, a broad coalition of people succeeded at shifting personal and business computing away from a platform tightly controlled by one company and towards a more open, decentralized internet.
- Several points in this paper can be distilled down to a few big takeaways. We need to transition from discussion to action on trustworthy AI. We need to mobilize not just engineers and regulators, but also everyday people, investors, and entrepreneurs. We need to make it easy and desirable for people to switch to services that are truly trustworthy, ensuring that companies aren’t just “trust washing.” Finally, we need to focus on not just the individual harms of AI, but also the collective harms — how these systems intersect with society at large.