In the past year, the pandemic that has swept the world has ruthlessly exposed many things: widely varying levels of preparedness to respond; collective attitudes toward health, technology, and science; and deep financial and social inequities. As the world continues to navigate the covid-19 health crisis, and some places even begin a gradual return to work, school, travel, and recreation, it is critical to resolve the competing priorities of protecting public health equitably while safeguarding privacy.
The ongoing crisis has driven rapid change in work and social behavior, as well as an increased reliance on technology. Now more than ever, it is important for companies, governments, and society to exercise caution in applying technology and handling personal information. The rapid expansion of artificial intelligence (AI) shows how readily adaptive technologies can intersect with humans and social institutions in potentially risky or inequitable ways.
“Our relationship with technology as a whole will have shifted dramatically post-pandemic,” said Yoav Schlesinger, who leads the ethical AI practice at Salesforce. “There will be a negotiation process between people, businesses, government, and technology; how their data flows among all of these parties will get renegotiated in a new social data contract.”
AI in action
As the covid-19 crisis began to unfold in early 2020, scientists looked to AI to support a range of medical uses, such as identifying potential vaccine candidates or treatments, helping detect potential covid-19 symptoms, and allocating scarce resources like intensive-care beds and ventilators. Specifically, they relied on AI to boost the analytical capacity of the systems used to develop state-of-the-art vaccines and treatments.
Although advanced data-analysis tools can help extract insights from a mass of data, the results are not always more equitable. In fact, AI-driven tools, and the data sets they rely on, can perpetuate inherent bias or systemic inequity. Throughout the pandemic, agencies such as the Centers for Disease Control and Prevention and the World Health Organization have gathered enormous amounts of data, but the data doesn’t necessarily accurately represent populations that have been disproportionately and negatively affected, including Black, brown, and indigenous people. Nor, Schlesinger said, has some of the progress made in diagnostics worked equally well for everyone.
For example, biometric wearables such as Fitbit or Apple Watch have shown promise in detecting potential covid-19 symptoms, such as changes in temperature or oxygen saturation. Yet these analyses often rely on flawed or limited data sets, and they can introduce bias or unfairness that disproportionately affects vulnerable people and communities.
“There is some research showing that green LED light has more difficulty reading pulse and oxygen saturation on darker skin tones,” said Schlesinger, referring to the semiconductor light source used in many wearables. “So it may not do an equally good job of catching covid symptoms for those with Black and brown skin.”
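As an illustration of the kind of screening logic such wearables apply, here is a minimal sketch in Python that flags readings departing from a wearer’s own baseline. All names, baseline values, and thresholds here are hypothetical, chosen for illustration only; a real screening system would need clinically validated thresholds and, as the quote above suggests, sensors validated across skin tones.

    from dataclasses import dataclass

    @dataclass
    class Reading:
        temperature_c: float  # skin temperature in degrees Celsius
        spo2_percent: float   # blood-oxygen saturation percentage

    def flag_reading(baseline: Reading, current: Reading,
                     temp_delta: float = 1.0, spo2_floor: float = 94.0) -> bool:
        """Flag a reading that departs from the wearer's own baseline."""
        fever_signal = (current.temperature_c - baseline.temperature_c) >= temp_delta
        hypoxia_signal = current.spo2_percent < spo2_floor
        return fever_signal or hypoxia_signal

    baseline = Reading(temperature_c=36.6, spo2_percent=98.0)
    print(flag_reading(baseline, Reading(37.9, 97.0)))  # True: temperature rise
    print(flag_reading(baseline, Reading(36.7, 92.5)))  # True: low oxygen saturation
    print(flag_reading(baseline, Reading(36.7, 97.5)))  # False: within normal range

The important point is that both comparisons depend entirely on the quality of the underlying sensor readings; if those readings are systematically skewed for some groups, the flags will be too.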
Artificial intelligence has proved more effective at helping analyze very large data sets. A team from the University of Southern California’s Viterbi School of Engineering developed an AI framework to help analyze covid-19 vaccine candidates. After identifying 26 potential candidates, it narrowed the field to the 11 most likely to succeed. The analysis drew on the Immune Epitope Database, which includes more than 600,000 known determinants of infection drawn from more than 3,600 species.
Other Viterbi researchers are applying AI to decipher cultural codes more accurately and to better understand the social norms that guide the behavior of ethnic and racial groups. That can have a significant impact on how certain populations fare during a crisis such as the pandemic, owing to religious ceremonies, traditions, and other social customs that can facilitate viral transmission.
Research led by principal scientists Kristina Lerman and Fred Morstatter is based on moral foundations theory, which describes the “intuitive ethics” that form a culture’s moral constructs, such as caring, fairness, loyalty, and authority, and that help inform individual and group behavior.
“Our goal is to develop a framework that allows us to understand, at a deeper level, the dynamics that drive a culture’s decision-making process,” Morstatter said in a USC report. “And by doing so, we can generate more culturally informed forecasts.”
The research also examines how to deploy AI in an ethical and fair way. “Most people, but not all, are interested in making the world a better place,” Schlesinger said. “Now we have to go to the next level: what goals do we want to achieve, and what outcomes would we like to see? How will we measure success, and what will it look like?”
Addressing ethical issues
Schlesinger said it is critical to ask questions about the data being collected and the assumptions built into the AI process. “We talk about achieving fairness through awareness. At every step of the process, you’re making value judgments or assumptions that will weight your outcomes in a particular direction,” he said. “That is the fundamental challenge of building ethical AI: looking at all the places where humans carry bias into the process.”
Part of the challenge involves rigorously examining the data sets that feed AI systems. It is essential to understand the sources of the data and its composition, and to answer questions such as: How is the data made up? Does it encompass a diverse array of stakeholders? What is the best way to deploy that data into a model to minimize bias and maximize fairness?
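As a sketch of what such an examination might look like in practice, the snippet below compares a data set’s group composition and outcome rates against reference population shares. The field names, toy records, and reference figures are hypothetical, chosen purely for illustration; a real audit would load the actual data set under review and use demographic baselines appropriate to the population the model is meant to serve.

    import pandas as pd

    # Toy records standing in for a clinical data set; in practice this
    # would be loaded from the real source being audited.
    df = pd.DataFrame({
        "group": ["A", "A", "A", "A", "B", "B", "C", "C", "C", "C"],
        "label": [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],
    })

    # Assumed reference shares for the population the model should serve.
    reference_share = {"A": 0.5, "B": 0.3, "C": 0.2}

    audit = pd.DataFrame({
        "sample_share": df["group"].value_counts(normalize=True),
        "positive_rate": df.groupby("group")["label"].mean(),
    })
    audit["reference_share"] = pd.Series(reference_share)
    audit["representation_gap"] = audit["sample_share"] - audit["reference_share"]

    # Large gaps flag groups that are under- or over-represented
    # before any model is ever trained on the data.
    print(audit.round(2))

Checks like this do not by themselves make a system fair, but they surface the value judgments and skews in the data before they propagate into a model’s outputs.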
As people return to work, employers may now use sensing technologies with AI built in, including thermal cameras to detect elevated temperatures; audio sensors to detect coughing or raised voices, which contribute to the spread of respiratory droplets; and video streams for monitoring hand-washing procedures, physical-distancing regulations, and mask requirements.
Such monitoring and analysis systems face not only technical accuracy challenges but also risks to human rights, privacy, security, and trust. The push toward increased surveillance has been a troubling side effect of the pandemic. Government agencies have used surveillance-camera footage, smartphone location data, credit card purchase records, and even passive temperature scans in crowded public areas such as airports to help trace the movements of people who may have contracted or been exposed to covid-19, and to establish virus transmission chains.
“The first question that needs to be answered is not just can we do this, but should we?” Schlesinger said. “Scanning individuals for their biometric data without their consent raises ethical concerns, even if it’s positioned as serving the greater good. As a society, we should have a robust conversation about whether there is good reason to implement these technologies in the first place.”
What the future will look like
As society returns to something approaching normal, it is time to fundamentally re-evaluate our relationship with data and to establish new norms for collecting data, along with its appropriate use and potential misuse. When building and deploying AI, technologists will continue to make necessary assumptions about data and processes, but the underpinnings of that data should be questioned: Is the data from a legitimate source? Who assembled it? What assumptions is it based on? Is it accurately representative? How can the privacy of citizens and consumers be preserved?
As AI is deployed more widely, it is essential to consider how to engender trust. One approach is to use AI to augment human decision-making rather than replace human input entirely.
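One common way to express that augmentation pattern in code is selective automation: the model acts on its own only for high-confidence cases and routes everything else to a person. The sketch below is a generic illustration with hypothetical thresholds and a stand-in scoring function, not a description of any system discussed in this article.

    from typing import Callable, Dict

    def triage(score_fn: Callable[[Dict], float], case: Dict,
               low: float = 0.2, high: float = 0.8) -> str:
        """Route a case based on the model's confidence score."""
        score = score_fn(case)
        if score >= high:
            return "auto-approve"          # clear-cut: safe to automate
        if score <= low:
            return "auto-decline"          # clear-cut: safe to automate
        return "refer to human reviewer"   # uncertain: keep a person in the loop

    def toy_score(case: Dict) -> float:
        # Stand-in scoring function for demonstration purposes only.
        return case["signal"]

    for signal in (0.95, 0.5, 0.1):
        print(signal, "->", triage(toy_score, {"signal": signal}))

The design choice here is where to set the thresholds: the wider the band between them, the more decisions stay with humans, trading throughput for oversight.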
“There will be more questions about the role AI should play in society, its relationship with human beings, and which tasks are appropriate for humans and which for AI,” Schlesinger said. “In certain areas, AI’s capabilities, and its ability to augment human capabilities, will accelerate our trust and reliance. Where AI doesn’t replace humans but doubles down on their efforts, that is the next horizon.”
In some arenas, humans will always need to be involved in decision-making. “In regulated industries such as health care, banking, and finance, someone needs to remain in the loop to maintain compliance,” Schlesinger said. “You can’t just deploy AI to make care decisions without a clinician’s input. As much as we would love to believe AI is capable of that, AI doesn’t yet have empathy, and probably never will.”
It is critical that the data collected and created by AI minimizes, rather than exacerbates, inequity. There must be a balance between seeking ways for AI to help accelerate human and social progress, promoting equitable actions and responses, and simply recognizing that certain problems will require human solutions.
This content was produced by Insights, the custom content department of MIT Technology Review. It was not written by the editors of MIT Technology Review.