OPINION: Two SoCal AI Experts, Neil Sahota and Brian Dolan, Share Their Insights and Concerns About the Challenge of Developing Ethics Around AI…

AI has permeated all aspects of our lives, and it continues to do so. Yet while this powerful tech has been around for a while, there's still no universal framework or set of guidelines governing its use and ethics.

In the meantime, there’s a collaborative effort underway to make SoCal an AI hub. See this article.

And there are several AI-based companies that call SoCal home, like Anduril (military defense), Veritone (IT) and AWM SmartShelf (marketing). All operate with their own agendas. All have the capability to track identifiable human features. inter-TECH-ion has contacted some of them but received no response.

So, how does AI function with no universal framework? Two SoCal AI experts, Brian Dolan and Neil Sahota, share their insights on how the ethics around AI are evolving and discuss the ramifications of this tech.

Dolan is a mathematical entrepreneur who's been building enterprise-grade AI for more than 20 years. He founded Verdant AI in LA. Here's what Dolan has to say:

Regarding organizations already working on AI and ethics – OpenAI has an ethics committee and there's Humanitarian AI. But there's nothing that's enforceable. (Some entity) will ultimately get around to creating laws to govern its use and construction… laws are typically written when someone crosses a line.

OpenAI is an AI research lab in SF. Its parent company, which has the same name, is a non-profit and is considered a competitor to DeepMind. It does AI research with the goal of promoting and developing “friendly” AI in a way that benefits humanity as a whole.

Humanitarian AI is a hybrid meetup community and open-source initiative with local groups all over the world. Former U.N. staff started it to give students, developers and others the opportunity to interact with each other and collaborate on developing back-end architecture, algorithms and datasets that are deemed “critical” to advancing humanitarian uses of AI.

We’re just getting to the point now where we’re seeing implicit bias in facial recognition, and that will spawn some laws, as it should.

But we should be monitoring this model construction. The model is what makes the actual decision. You feed it the data and it comes up with a model.
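As a minimal sketch of the process Dolan describes (the data, features and model choice below are hypothetical, and a scikit-learn-style workflow is just one common way to do it): the training data goes in, a fitted model comes out, and it is that model, not a person, that makes the downstream decision, so whatever bias is in the data is baked into the decisions.

```python
# Hypothetical sketch of "you feed it the data and it comes up with a model".
from sklearn.linear_model import LogisticRegression

# Made-up historical records; if these labels reflect biased past outcomes,
# the fitted model will reproduce that bias.
X_train = [[25, 1], [40, 0], [31, 1], [52, 0]]   # e.g. [age, group_flag]
y_train = [0, 1, 0, 1]                           # past approve/deny decisions

# Model construction: the data determines what the model learns.
model = LogisticRegression().fit(X_train, y_train)

# The model, not a human, now makes the actual decision for a new case.
print(model.predict([[29, 1]]))
```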

Now that we’re seeing facial recognition, and racial profiling is such a problem, I think it’ll make us turn around and ask where else systemic bias is happening. And I think marketing is next…

The pharmaceutical community needs to be more sensitive to ethnic bias. That’s being corrected now for marketing reasons, but not for ethical reasons. And they’ll write a law about it, eventually.

If we’re harming 22-year-old Latinas, we can’t sell (a drug) to them anymore. So, we need to find another drug. The market is causing (pharma companies) to find out where their racial biases are, but there are no laws yet. Still, I think this is the first place where laws will be passed, laws that say what demographic a drug will affect.

The Militarization of Tech

I think the militarization of tech has been a problem since the first military arose. The historical view is that civilizations have become more moral, but they have still developed these war machines.

When people talk about tech becoming a tool for mass destruction, they think of the Manhattan Project. I think there was a key cultural difference between that and today. The Manhattan Project unleashed a terrifying power into the world. The scientists did the research knowing there was a lot of insulation between what they were building and the implementation, and that the implementation would be carefully considered by a government agency. They had some buffer.

Two things have been stripped away, especially over the past 20 years. First, tech goes directly to implementation because it’s coming out of private companies rather than government institutions.

So when companies come up with potentially dangerous tech, it escapes right away. There’s no longer that buffer of the U.S. government to slow it down. That’s one driving force behind how available tech and AI have become to build.

Second, over the years, government has become much less responsible in its use of these things.

Regarding Ethics

I think organizations are trying to create a social movement around it right now. You can’t legislate to get people to wear masks during a pandemic. It’s the exact same problem with AI. You can’t pass a law that says you can’t use AI for evil.

There are avenues to be explored, to see what the precedent is. There’s no precedent that says something has to be used for good or bad.

I think a framework needs to be designed socially first, before it’s done judicially.

Uranium is a good example – the material itself is highly trackable. It only comes from a few places.

AI is not trackable at all. Anyone with a computer who knows a little bit of math can do it.

I don’t know who would start with an ethical framework for AI.

About Verdant AI

Verdant is a values-driven organization, so we’re very careful about the kind of tech we’re working on. We’re working on a multitude of things, mostly around environmental or health issues, like using biomass to create clean energy and heal the soil. We use AI to help farmers increase their crop yields while reducing waste.

We’re also working with another company on addiction and recovery, helping people get through rehab and stay on the road to recovery from substance abuse and other sorts of addiction.

A big failure in the recovery pipeline is that, because of regulations and because of the way the commercial market around recovery is structured, it’s hard to find the right recovery house.

If you have a specific need or medical condition, you need proper care. Just like I did with my last company, matching cancer patients to clinical trials, we’re trying to accomplish a similar thing: helping people find proper care.

AI and Culture

One of the things that’s been important to me lately is that AI reflects the culture. So, when we’re building a model, we’re putting our actual culture into it. When people say, “I built a model and it turned out to be racist,” I say that’s interesting. I don’t want to put a moral judgment on these people. They didn’t do it intentionally. It just maybe didn’t occur to them that they have nothing but white male faces in their training data.

Because of that, when you start to examine the decisions that AI makes, you realize a lot about American culture. I don’t feel like a misogynist or racist, but I know that I’m living in a culture that has too much of that embedded.

This is where I become an isolated voice in the AI community. The past 10 years in AI have been dominated by vast data movements: put huge amounts of data in and it’ll teach us how to drive a car. There’s nothing wrong with that until you try to make it make a more sophisticated decision.

Take a question like: why are the kids in this small class underperforming on a standardized test? That’s a more complicated problem.

If I throw two trillion lines of data at that, I’m not going to solve it. Even if I collect video and sound of everything the kids do in the classroom, the real cause is outside of that observation: the socio-economic cause. Some of the students are probably learning-challenged already. It’s not the class size. It’s the selection bias. When you’re trying to solve these complicated problems, you have to have the domain knowledge on what’s really causing the problem.

I have a different voice than many other AI guys. I think the Big Data movement is part of the problem because it’s just reinforcing the problems we already had. The more data I have, the more I reinforce the average, versus asking what the cause is, which is a more complicated problem.

**

And here are the comments of Neil Sahota, an inventor, author, business advisor and angel investor. He’s also the Chief Innovation Officer at UCI’s School of Law.

The biggest challenge with ethics is defining what “right” (or moral/ethical) use is. In China, the police utilize Google Glass-style tech with AI that provides information on each person, such as their name, address, where they work, where they’ve been in the last two hours, etc.

In China, this is considered a good thing because it helps catch criminals faster and helps find lost children quicker.

However, take this solution to Europe or North America, and there are big concerns about privacy, profiling and abuse of information.

Back in 2018, I spoke at the UN’s Global Symposium for Regulators, and I brought up the point that there are no boundaries in the digital age. Ethics and morals are very subjective among people, let alone countries.

Instilling Ethics Into Tech

To instill ethics into technology, we have to do two interconnected things: 1) bring a different and diverse mindset to thinking about how technology could be used (not just the happy-path scenario), and 2) establish a global baseline for ethical behavior that everyone would have to agree to.

This second point caused quite a commotion, but by the following year, global government agencies were actively talking about it. Until we have this baseline, it will be incredibly difficult to make a sustained, successful push towards the ethical use of AI.

Diversity and Collaboration are Critical

It’s no secret that the lack of diversity and inclusion in developing AI solutions or policy/regulation/legislation has been a killer. (There are) constant challenges going on, like interoperability and isolation. For interoperability, there’s a growing concern that more and more of the technologists are creating AI solutions that they don’t fully understand. So, when the AI does recommend something or does something (like the Facebook Messenger chatbots creating their own shorthand language from English), it freaks people out (and rightfully so).

In conjunction, there are a lot of groups and government agencies that are trying to create policies and guidelines by themselves, rather than collaborating with other institutions or even private industry. Being siloed really limits the effectiveness and use of their work.

In one instance, there was an academic group that put together a 116-page checklist for software developers to use when creating any piece of software. To be honest, it was met with disbelief that it would be easily usable by anyone, and it probably hasn’t been.

We need to have this conversation. It is an incredibly difficult conversation, and it will take a LONG time to come up with a baseline of ethical standards.

Most people know that it needs to be done, but there is a lot of hesitancy to start because it is such a visceral topic. Nevertheless, we need more people willing to stand up and say this, while also starting and championing these conversations. Just as I was willing to do at the Global Symposium for Regulators, we need more people to put the challenge forward so that we can start talking about it.

Ethics is a core part of developing AI solutions. In reality, unless you’re a super-villain, no one is going to build an army of evil AI robots to conquer the world.

The real challenge is having a common definition of ethics. Is it OK to use someone’s data to get them to buy more products? To sway their opinion on something like a political candidate? To identify early warning signs of a mental health issue? In theory, this could all be done from the same set of data, so does that mean they would all be unethical? Or all ethical?

Regarding Creating an AI Hub in OC

In creating an AI hub here, we may not solve the challenge of a global ethical baseline. However, we do have a great opportunity to bring all stakeholders to the table. We can bring in the government agencies, academics, private industry, non-profits, people from diverse backgrounds, etc. to start having these conversations, and, more importantly, identify the potential disparate impacts.

That’s really the biggest challenge because AI, and pretty much anything in life, impacts groups of people differently. Just consider that we don’t even think about how environmental issues impact people differently. (Please reference my blog on this topic). If we can’t develop this mindset in general, how will we come to an agreement on AI?

About The Author

Deirdre Newman is a long-time journalist who's covered OC startups for a few years.
