System Error

While expanding the scope of “regulation” of technology runs contrary to what Silicon Valley wants you to believe, there is considerable merit in it. I am linking to a fascinating discussion around it on YouTube and highlighting specific issues from the transcript:

And how do we want to balance the privacy that we care about against the potential benefits of access to data? And how do we want to take advantage of what automation enables, but preserve people’s ability to find meaningful work and meet the needs of their families?

Those are societal trade-offs, and trade-offs that can’t really be left to companies on their own. If you want to call that regulation, fine, but recognise that in disparaging regulation, you’re effectively disparaging the role of democracy and the role of the collective in basically making choices that benefit all of us.

Consider a simple analogy that we sometimes like to talk about: driving on the roadways. If you were to tell someone “there are no rules on the road; you can either make the choice to drive and be careful yourself, or just don’t drive,” you see what the flaw is there, right? Because there’s value in driving, and telling people that they should just be personally responsible for driving doesn’t solve the problem.

What we got was a set of regulations that gave us things like lanes and stop signs and streetlights, things like that. They created a system that made driving safer for everyone. Now, at the same time, you still have your personal choice around how safely you drive, how quickly you want to drive, and also whether or not you want to drive at all. But we get a system that works better for everyone because we got regulation. That’s the moment we’re at with technology.

This is a wonderful insight from the professors. Regulation is deemed a “dirty word” akin to “control”. No regulation is perfect, and there are trade-offs. However, institutional frameworks ensure that unrestricted data collection doesn’t lead to concentration of power and the harms that accrue from excluding people. Healthcare is deemed a universal right, and algorithms threaten to undermine institutional and personal autonomy.

The authors raise another valid point about how AI acts as a mirror to our historical biases.

Also, are they being audited for things like bias that might exist in the data, or for reinforcing historical patterns that we don’t actually want to see but that we think are somehow more objective because they’re made by a computer? Really, what AI gives us is a mirror to our society. There is a bunch of historical data that’s fed into these systems, which then gets turned into models that make future decisions. What that means is we’re codifying the past. Part of codifying the past means putting a mirror to ourselves and understanding what we have actually done that we like and don’t like. What do we want to change?

And the only way we can do that is by having structures in place that force us to actually look critically at these algorithms, how they’re used, what their impacts are, and even tease apart details around the particular predictions they make, so that we can actually ensure a future that’s positive for everyone, rather than just reinforcing the past and concentrating that power into the hands of a few people who know how to work with AI.
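The call to “tease apart details around the particular predictions” has a concrete counterpart in fairness auditing. As a minimal sketch (mine, not the authors’), the Python below computes demographic parity, i.e. the positive-prediction rate per group, over a model’s outputs. The data, group labels, and the hiring scenario are all hypothetical illustrations; a large gap between groups is exactly the kind of codified historical pattern an audit should surface.

# A minimal sketch of the kind of audit the speakers call for:
# checking whether a model trained on historical data treats
# demographic groups differently. All data below is hypothetical.
from collections import defaultdict

def demographic_parity(predictions, groups):
    """Return the rate of positive (1) predictions for each group.

    predictions: iterable of 0/1 model outputs
    groups:      iterable of group labels, aligned with predictions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical audit of a hiring model's past decisions.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity(preds, groups))
# {'A': 0.75, 'B': 0.25} -- a 50-point gap worth investigating

Demographic parity is only one of several fairness metrics, and which one matters depends on the decision being made; the point is that the check is simple to run once the structures exist that require running it.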

AI is being weaponized for overt and covert political objectives. It is being pushed in healthcare (from a consumer point of view) with a constant din about serving “better outcomes” because the existing processes are “inefficient”. When AI is hyped up as a magic bullet, its shortcomings become even more glaring. I personally believe healthcare should recognize its own internal limitations and then use technology to make processes more efficient.

Another insight:

So what does that mean for a future beyond the current moment? It means we need to address all three of those things. We need a mindset among technologists that is not firmly rooted in optimisation, but that actually grapples with the relevant trade-offs and the different values that are potentially being encoded, or could be encoded, in technologies; and you need to approach that through the work you do educating technologists and in companies themselves.

Then the third change is going to come from a government that is no longer asleep at the wheel. And we’re seeing the very beginnings of that in the moment that we’re in now, but we’re going to need a government that is capable, and adaptable, and flexible, so they can govern technology in democratic ways. That’s going to be generations of work, not just to solve the problems of the moment, but to reboot the structure of our government so it can navigate technology going forward.

The last point is over-optimistic in expecting governments to become “flexible”. That is a tall promise, unless someone comes up with a way to reduce bureaucracy without reducing regulatory oversight.

Highly recommended read in its entirety!
