Her new organization, the Distributed Artificial Intelligence Research Institute (DAIR), aims both to document harms and to develop a vision for AI applications that can have a positive impact on the same groups. Gebru helped pioneer research into facial recognition software’s bias against people of color, which prompted companies like Amazon to change their practices. A year ago, she was fired from Google over a research paper critiquing the company’s lucrative AI work on large language models, which can help answer conversational search queries.
The funders are compelling: the Ford Foundation, Open Society Foundations and the Kapor Foundation. Each is focused on reducing the “harms” arising from AI. I am sure she’ll get a lot of attention, and corporations are even being sued over their “ex-slogans”. In this anything-goes, lawsuit-funded craze, AI and ML will suffer because the cost of assembling and training on datasets is becoming unsustainable. The field will be priced out before we even realise what has happened. I therefore foresee that organisations with specific levers, the biggest marketing budgets or the strongest name recognition in the crowded data space will get to call the shots.
These are the “research” issues that require careful consideration: What is the ideal size of a dataset? How do we avoid local minima? And so on.
Good luck to her efforts! I am glad that she’s investing in pursuing her passions.