Exploring the interplay between humans and AI algorithms to help build accountability and trust.
A version of the famous trolley problem, which poses a philosophical choice between two bad outcomes, was raised during a discussion on ethical artificial intelligence and its impact on everyday life. Suppose a self-driving car is traveling down a narrow alley with an elderly woman on one side and a small child on the other, and there is no way to thread between them without causing a fatality. Rather than reasoning about the point of impact, the speaker pointed out, the car could have avoided choosing between two bad options by making a decision earlier: while approaching the alley, it could have detected that the space was narrow and slowed to a safe speed. Much of current AI practice is akin to the trolley problem in this respect, relying on downstream remedies such as assigning responsibility after someone has been left with no viable options. Engineering systems are not isolated from the social systems in which they intervene; ignoring this truth can produce tools that are ineffective or, more worryingly, hazardous when deployed.
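That upstream framing can be made concrete. The sketch below is purely illustrative and not from the discussion itself: the perception input, vehicle width, and thresholds are all hypothetical assumptions. It encodes the idea as a speed policy that slows the car before it enters a narrow passage, so it never faces a trolley-style choice at the point of contact.

```python
# Illustrative sketch only: an upstream speed policy for the alley scenario.
# The clearance threshold and crawl speed are hypothetical assumptions.

SAFE_CLEARANCE_M = 1.0   # minimum lateral clearance per side, in meters
CRAWL_SPEED_MPS = 2.0    # a speed at which the car can always stop in time

def choose_speed(passage_width_m: float, vehicle_width_m: float,
                 cruise_speed_mps: float) -> float:
    """Pick a speed *before* entering a passage. If the passage leaves less
    than SAFE_CLEARANCE_M on each side, slow to a crawl at which the car can
    brake for any pedestrian, avoiding a downstream choice between victims."""
    clearance_per_side = (passage_width_m - vehicle_width_m) / 2
    if clearance_per_side < SAFE_CLEARANCE_M:
        return min(cruise_speed_mps, CRAWL_SPEED_MPS)
    return cruise_speed_mps

# A 2.0 m wide car approaching a 3.0 m alley has only 0.5 m of clearance per
# side, so the policy slows it from ~50 km/h (13.9 m/s) to 2.0 m/s.
print(choose_speed(passage_width_m=3.0, vehicle_width_m=2.0,
                   cruise_speed_mps=13.9))
```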
Social media algorithm auditing
To get a sense of what auditing an AI algorithm entails, imagine that regulators mandate that public health content about vaccines not differ dramatically for users who lean left or right ideologically. How should auditors verify that a social media platform complies with the regulation? Can a platform be made to comply without hurting its bottom line? And what effect does compliance have on the content people actually see?
Designing an auditing system is complicated, in part because of the enormous number of stakeholders involved in social media. Auditors must examine the algorithm without gaining access to sensitive user data, and they must work around trade-secret protections that can legally bar them from getting a good look at the very algorithm they are auditing. Other factors must be weighed as well, such as balancing the removal of false material against the protection of free expression.
To address these difficulties, the team designed an auditing procedure that requires only black-box access to the social media algorithm (protecting trade secrets), does not alter content (avoiding censorship concerns), and does not require access to users (protecting their privacy). While designing the procedure, the team analyzed its qualities and found that it ensures a desirable property called decision robustness. They also demonstrate that a platform can pass the audit without sacrificing profits, which is good news for platforms, and that the audit incentivizes the platform to show users varied material, which has been shown to limit the spread of disinformation, counter echo chambers, and more.
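One way to picture such a black-box audit is as a counterfactual query procedure. The sketch below is a minimal illustration, not the team's actual method: `get_feed` stands in for a hypothetical platform endpoint, `is_vaccine_related` for an auditor-side content classifier, and the threshold is an arbitrary assumption. It compares the vaccine-related content shown to otherwise-identical left- and right-leaning profiles.

```python
from collections import Counter

def audit_vaccine_exposure(get_feed, is_vaccine_related, base_profiles,
                           n_items=50, threshold=0.2):
    """Black-box audit sketch over a hypothetical interface.

    get_feed(profile)        -> list of content IDs (the platform's black box)
    is_vaccine_related(item) -> bool (an auditor-side classifier)
    base_profiles            -> synthetic profiles; no real user data is used
    """
    left, right = Counter(), Counter()
    for profile in base_profiles:
        # Counterfactual pair: identical profiles except for ideological leaning.
        left_feed = get_feed({**profile, "leaning": "left"})[:n_items]
        right_feed = get_feed({**profile, "leaning": "right"})[:n_items]
        left.update(i for i in left_feed if is_vaccine_related(i))
        right.update(i for i in right_feed if is_vaccine_related(i))

    # Total variation distance between the two exposure distributions:
    # 0 means identical exposure, 1 means completely disjoint exposure.
    items = set(left) | set(right)
    l_total, r_total = sum(left.values()) or 1, sum(right.values()) or 1
    tv = 0.5 * sum(abs(left[i] / l_total - right[i] / r_total) for i in items)
    return tv <= threshold  # compliant if exposure differs by at most threshold
```

Because the audit only queries feeds for synthetic profiles, it never touches the ranking model's internals, alters any content, or reads real user data, which is what makes the trade-secret, censorship, and privacy constraints above satisfiable.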
Who gets good outcomes, and who gets bad ones?
Some platforms, such as job-search engines or ride-hailing apps, are part of a matching market, which uses an AI algorithm to match one group of people (such as workers or passengers) with another (such as employers or drivers). Individuals frequently develop their matching preferences through trial and error. In labor markets, workers learn what kinds of jobs they want, while employers learn what qualifications they seek in workers.
Competition, however, can impair that learning. If workers from a certain background are continually denied tech jobs because of heavy competition for those positions, they may never gain the experience they need to decide whether they actually want to work in tech, and tech employers, in turn, may never learn what those workers could do if hired. By modeling such matching markets, it is possible to pursue a stable outcome (workers aren't incentivized to leave the market), low regret (workers are content with their long-term outcomes), fairness (happiness is evenly distributed), and high social welfare.
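To make those four criteria concrete, here is a minimal sketch, not the study's model: it computes a stable matching with the classic worker-proposing Gale-Shapley algorithm and then scores regret, fairness, and welfare from the workers' side. The toy preference lists and the rank-based utility convention are illustrative assumptions.

```python
def gale_shapley(worker_prefs, firm_prefs):
    """Worker-proposing Gale-Shapley: returns a stable matching, i.e. no
    worker and firm would both rather be with each other than their match."""
    free = list(worker_prefs)                 # workers not yet matched
    next_pick = {w: 0 for w in worker_prefs}  # next firm index each worker tries
    rank = {f: {w: i for i, w in enumerate(ps)} for f, ps in firm_prefs.items()}
    match = {}                                # firm -> worker

    while free:
        w = free.pop(0)
        f = worker_prefs[w][next_pick[w]]
        next_pick[w] += 1
        if f not in match:
            match[f] = w
        elif rank[f][w] < rank[f][match[f]]:  # firm prefers the new proposer
            free.append(match[f])
            match[f] = w
        else:
            free.append(w)
    return {w: f for f, w in match.items()}

def worker_metrics(matching, worker_prefs):
    """Score the matching from the workers' side. Utility of a match is how
    close the assigned firm sits to the top of the worker's preference list."""
    n = len(worker_prefs)
    utils = [n - 1 - worker_prefs[w].index(f) for w, f in matching.items()]
    regret = max(n - 1 - u for u in utils)   # how far the worst-off worker fell
    fairness_gap = max(utils) - min(utils)   # 0 = happiness evenly distributed
    welfare = sum(utils)                     # total happiness across workers
    return regret, fairness_gap, welfare

# Toy market: three workers, three firms, complete preference lists.
workers = {"w1": ["f1", "f2", "f3"], "w2": ["f1", "f3", "f2"], "w3": ["f2", "f1", "f3"]}
firms = {"f1": ["w2", "w1", "w3"], "f2": ["w3", "w1", "w2"], "f3": ["w1", "w2", "w3"]}
m = gale_shapley(workers, firms)
print(m, worker_metrics(m, workers))  # w1 lands its last choice: stable, not fair
```

Even in this toy market the tension is visible: the matching is stable and total welfare is decent, yet one worker ends up with their last choice, so the regret and fairness gap are at their maximum. Balancing all four criteria at once is exactly the difficulty the next paragraph describes.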
Perhaps surprisingly, it is not evident that stability, low regret, fairness, and high social welfare can all be achieved at the same time. Another significant component of the study was determining when all four criteria can be met simultaneously and investigating the implications of those scenarios. A system that takes steps to address bias, whether in computing or in the community, earns legitimacy and trust. Accountability, legitimacy, and trust are all essential elements of society, and they will ultimately determine which institutions endure over time.