
Blue J’s Charter on Algorithmic Responsibility

Adopted on February 14, 2020

Avoid creating or reinforcing inappropriate bias

We rely on established human rights norms for determining what could be inappropriate bias. In particular, we proactively seek to identify and avoid unjust impacts on parties on the basis of protected characteristics such as race, national or ethnic origin, colour, religion or creed, age, sex, sexual orientation, gender identity or expression, family status or disability.

Continuously monitor outcomes

We monitor the outcomes of algorithmic predictions in order to ensure continued accuracy and to guard against unintentional outcomes. We are mindful of the potential impact of feedback loops and have measures in place to avoid them. We conduct regular testing in order to ensure that the data used to generate algorithmic predictions are relevant, accurate and up-to-date.

Provide transparent data and predictions

We strive for transparency so that our users know how we make predictions. We use algorithms that can be explained to and understood by our users. We are transparent about where our data comes from and provide meaningful explanations for how results are reached. We give users descriptions of the variables that we use in our algorithms.

Allow for human oversight and control

Our algorithms are subject to human oversight and control. Our predictions are designed to assist humans with decision-making. Decisions are made with a human-in-the-loop, and our systems allow for human intervention. To aid in this, we collect feedback from users and respond promptly to reports of unexpected outcomes.

Improve the human condition

Advances in machine learning will have transformative impacts on how many decisions are made. We promote algorithmic tools that have the potential to improve the lives of individuals and the more efficient, fair, and transparent functioning of institutions.

Invest in data quality

We invest in quality at every stage of our processes to produce the most reliable predictions possible. We verify source data and validate models using best-in-class methods.

Insist on high standards of accuracy

We strive to ensure that our products are accurate. Before release, every product must meet our internal accuracy standards so that predictions are of the highest quality.

Safeguard privacy and security

Our users’ privacy is one of our highest priorities. We use current industry-standard encryption technology to safeguard users’ information.

RELAI (Reliable, Ethical, Legal AI) Committee Members 2020

Mathew Armstrong

Kim Condon

Adam Haines

Brett Janssen

Sabrina Malek

Nicholas Mirotchnick

Jesse Provost

Mike Rodgers

Samantha Santoro

Albert Yoon