Update on using ML models to predict student grades

I previously wrote about how there could be more transparency around how the International Baccalaureate (IB) program used ML models to predict student grades. There’s been an update since then. It turns out the UK government also used a similar algorithm to determine students’ A-level scores this year, downgrading roughly 40% of the grades teachers had predicted. Students protested all over the country, since these scores determine where they are admitted for higher education, and the government eventually scrapped the algorithm-generated grades in favor of teacher-assessed ones.

As I outlined before, there are many steps that should have been taken, and made clear to the students, while deciding on the best model to predict their grades. Unfortunately that was not the case, and the result was significant algorithmic bias and, most likely, overfitting to the historical data the model was trained on. Transparency about which features and models were tested, and the methodology used to evaluate them, could have prevented much of this confusion. Let’s all ask for more insight and interpretability from our ML models!
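
To make that concrete, here is a minimal sketch of two checks that could have been published alongside such a model: an overfitting check comparing training, held-out, and cross-validated error, and a simple per-group bias audit. This is not the actual IB or Ofqual methodology; the feature names, the 1–7 grading scale, and the synthetic data are all illustrative assumptions.

```python
# Illustrative sketch only -- synthetic data, hypothetical features,
# not the actual methodology used for IB or A-level grades.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2_000
df = pd.DataFrame({
    "teacher_prediction": rng.integers(1, 8, n),        # hypothetical 1-7 scale
    "school_historical_mean": rng.normal(4.5, 1.0, n),  # hypothetical feature
    "school_size": rng.integers(5, 300, n),             # hypothetical feature
})
# Synthetic target: the "true" grade loosely follows the teacher prediction.
df["final_grade"] = (df["teacher_prediction"] + rng.normal(0, 0.8, n)).clip(1, 7)

X, y = df.drop(columns="final_grade"), df["final_grade"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# 1. Overfitting check: a large gap between training and held-out error,
#    or unstable cross-validation scores, is a red flag worth publishing.
print("train MAE:", mean_absolute_error(y_train, model.predict(X_train)))
print("test  MAE:", mean_absolute_error(y_test, model.predict(X_test)))
print("5-fold CV MAE:",
      -cross_val_score(model, X, y, cv=5,
                       scoring="neg_mean_absolute_error").mean())

# 2. Bias audit: compare mean prediction error across groups, e.g. small
#    vs. large schools -- one axis along which the A-level algorithm was
#    widely reported to behave differently.
err = model.predict(X_test) - y_test
small = X_test["school_size"] < 50
print("mean error, small schools:", err[small].mean())
print("mean error, large schools:", err[~small].mean())
```

Neither check is sophisticated, and that’s the point: even this level of reporting, done before results were released, would have surfaced the kinds of problems students ended up discovering the hard way.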