We need to design distrust into AI systems to make them safer

Interestingly, you've talked about actively designing distrust into these systems as a way to make them safer.

Yes, that's what you have to do. We're actually experimenting right now with the idea of denial of service. We don't have results yet, and we're wrestling with some ethical issues, because once we discuss and publish the results, we'll have to explain why you might sometimes not want AI to be able to deny service. How do you take the service away if someone really needs it?

But to stick with the Tesla example of distrust, denial of service would work like this: I create a trust profile for you, which I can build from the number of times you deactivated the system or took your hands off the wheel. Given that disengagement information, I can model the point at which you are fully in this trusting state. We're doing this not with Tesla data but with our own data. And at some point, the next time you come to drive, you would get a denial of service: you are not authorized to use the system for X period of time.

It's almost like punishing a teenager by taking away their phone. You know that teenagers are least likely to misbehave if you tie the consequence to their means of communication.
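Speaking very loosely, you could sketch this denial-of-service idea in code. The following Python is a minimal illustration only, not the actual experiment described above: the event threshold, counting window, lockout duration, and all names are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

# Hypothetical parameters; the interview does not specify how the real
# experiment models over-trust, so these numbers are placeholders.
DISENGAGEMENT_LIMIT = 5        # hands-off/override events per window
WINDOW = timedelta(days=7)     # period over which events are counted
LOCKOUT = timedelta(hours=24)  # the "X time period" of denied service

@dataclass
class TrustProfile:
    """Per-driver record of disengagement events (e.g. letting go of the wheel)."""
    events: List[datetime] = field(default_factory=list)
    locked_until: datetime = datetime.min

    def record_disengagement(self, when: datetime) -> None:
        self.events.append(when)
        # If too many disengagements fall inside the window, treat the
        # driver as over-trusting and deny service for a fixed period.
        recent = [t for t in self.events if when - t <= WINDOW]
        if len(recent) >= DISENGAGEMENT_LIMIT:
            self.locked_until = when + LOCKOUT

    def service_allowed(self, when: datetime) -> bool:
        return when >= self.locked_until

# Usage: after repeated hands-off events, the next drive is refused.
profile = TrustProfile()
start = datetime(2021, 3, 1, 8, 0)
for i in range(DISENGAGEMENT_LIMIT):
    profile.record_disengagement(start + timedelta(hours=i))
print(profile.service_allowed(start + timedelta(hours=6)))  # False: lockout active
```

The point of the sketch is that denial is triggered by a pattern of behavior, repeated disengagements over time, rather than by any single event, mirroring the idea of building a trust profile.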

What other mechanisms have you explored for increasing distrust in these systems?

Another method we've explored falls roughly under explainable AI, in which the system provides an explanation of some of its risks or uncertainties. All of these systems have uncertainty; none of them is 100% accurate. And a system knows when it is uncertain, so it can present that information in a way humans can understand, and people will then change their behavior.

For example, say I'm a self-driving car and I have all the map information, and I know certain intersections are more accident-prone than others. As we approach one of them, I would say: "We're approaching an intersection where 10 people died last year." You explain it in a way that makes someone go, "Oh, wait, maybe I should pay more attention."
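To make that concrete, here is a minimal sketch of how a system might surface risk information in plain language. The map annotation, threshold, and function name are hypothetical; the interview describes the behavior, not an implementation.

```python
from typing import Optional

# Hypothetical map annotation: intersection id -> fatalities last year.
ACCIDENT_HISTORY = {
    "5th_and_main": 10,
    "oak_and_2nd": 1,
}

RISK_THRESHOLD = 5  # fatalities/year above which the car speaks up (assumed)

def explain_approach(intersection_id: str) -> Optional[str]:
    """Return a human-readable warning if the upcoming intersection is risky."""
    deaths = ACCIDENT_HISTORY.get(intersection_id, 0)
    if deaths >= RISK_THRESHOLD:
        return (f"We are approaching an intersection where "
                f"{deaths} people died last year.")
    return None  # below threshold: stay quiet to avoid alert fatigue

message = explain_approach("5th_and_main")
if message:
    print(message)  # prompts the passenger to pay more attention
```

Keeping quiet below a threshold is a deliberate choice in the sketch: if the car explained every small risk, people would tune the warnings out instead of recalibrating their trust.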

We've already talked about your concerns with people over-trusting these systems. What about the other side? Are there any benefits?

The downsides are really linked to bias. That's why I always talk about bias and trust interchangeably. Because if I over-trust these systems, and these systems are making decisions that have different outcomes for different groups of people (say, a medical diagnosis system that behaves differently for women than for men), then we're creating systems that amplify the inequities we currently have. That's a problem. And when you link it to things tied to health or transportation, both of which can create life-or-death situations, a bad decision can actually prevent you from recovering. So we really do have to fix it.

On the positive side, automated systems are, on average, better than the average person. I think they can be even better still, and personally, in certain situations I would rather interact with an AI system than with certain people. I know it has some problems, but give me the AI. Give me the robot. They have more data; they're more accurate. Especially if you're dealing with a novice. It's a better outcome. The outcome just may not be equitable.

In addition to your robotics and AI research, you've also been a strong advocate throughout your career for increasing diversity in the field. You started a program to mentor at-risk junior high school girls 20 years ago, long before many people were thinking about these issues. Why is that important to you, and why is it important for the field?

It's important to me because I can point to a time in my life when someone basically gave me exposure to engineering and computer science. I didn't even know it was a thing. And that's why I never had the problem of doubting whether I could do it. So I've always felt it was my responsibility to do the same thing for others. As I got older, I also noticed that a lot of people in the room didn't look like me. So I realized: wait, there's definitely a problem here, because people just don't have the role models, they don't have the access, and they don't even know this is a thing.

Why it's important to the field is that everyone brings a different set of experiences. I was thinking about human-robot interaction, for instance, before it was even a thing. It wasn't because I was brilliant; it was because I looked at the problem in a different way. And when I talk with people who have different viewpoints, it's like, "Oh, let's try to combine these and figure out the best of both worlds."

Airbags kill more women and kids. Why is that? Well, I would claim it's because someone wasn't in the room to say, "Hey, why don't we test this on women in the front seat?" There are a bunch of problems that have killed or been hazardous to certain groups of people, and I would argue that if you go back, it's because there weren't enough people in the room who could say, "Hey, have you thought about this?", because they speak from their own experience, their environment, and their community.

How do you hope AI and robotics research will evolve over time? What is your vision for the field?

If you think about coding and programming, almost anyone can do it now. There are so many organizations, like Code.org. The resources and tools are there. I would love to one day have a conversation with a student where I ask, "Do you know about AI and machine learning?" and they say, "Huh, I've been doing that since the third grade!" I want to be shocked like that, because that would be wonderful. Of course, then I'd have to think about what my next job would be, but that's a whole other story.

But I think when you have the tools of coding, AI, and machine learning, you can create your own jobs, you can create your own future, you can create your own solutions. That would be my dream.
