An article by a company that develops research software discusses the concept of “ethically designed algorithms”, which could be overseen by a body akin to an “FDA of Algorithms” (Andrew Tutt, 2016).
The principles of ethically designed algorithms, as described in the article, embody the following elements:
Make available externally visible avenues of redress for adverse individual or societal effects of an algorithmic decision system, and designate an internal role for the person who is responsible for the timely remedy of such issues.
Ensure that algorithmic decisions as well as any data driving those decisions can be explained to end-users and other stakeholders in non-technical terms.
Identify, log, and articulate sources of error and uncertainty throughout the algorithm and its data sources, so that expected and worst-case implications can be understood and inform mitigation procedures.
Ensure that algorithmic decisions do not create discriminatory or unjust impacts when compared across different demographics (e.g. race or sex).
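The last principle is the most directly measurable. As a minimal sketch (the group labels, decision values, and function names here are illustrative assumptions, not anything specified in the article), a demographic-parity check might compare positive-decision rates across groups:

```python
# Hypothetical sketch: a simple demographic-parity check for an
# algorithmic decision system. All data below is made up for illustration.

def selection_rates(decisions, groups):
    """Return the positive-decision rate for each demographic group."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

def parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups.
    A large gap flags a potentially discriminatory impact worth auditing."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Example: group A is selected 75% of the time, group B only 25%.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(parity_gap(decisions, groups))  # -> 0.5
```

A real audit would use a dedicated fairness library and multiple metrics (disparate impact ratio, equalised odds, and so on), but even a check this simple makes the principle operational rather than aspirational.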
Has anything similar been developed for research in related fields (e.g. psychology or medical research) that could be adapted to UX design? Or does something like this already exist and is it already in use?