NSU Python Data Mining Fairness and Classification Task
Description
Overview and Goals:
The objectives of this assignment are as follows:
Implement a classification task
Diagnose the fairness properties of the classifiers
Test a few mitigation strategies
Detailed Description:
In this assignment, we will use the Adult data set, which contains census-based features. The label to predict is whether an individual makes over 50K.
There are three parts to this task.
Part 1: Classification
The data has 13 features and one outcome. There are 32561 data points. The objective is to develop a classifier to predict whether an individual makes over 50K. Your classifier should return 1 if an individual makes more than 50K and 0 otherwise.
For evaluation purposes (leaderboard ranking), we will use the simple accuracy metric, comparing the predictions you submit on the test set with the ground truth. Some things to note:
The public leaderboard shows results for 50% of randomly chosen test instances only. This is standard practice in data mining challenges to avoid gaming of the system and prevent overfitting.
The private leaderboard, released after the deadline, evaluates all entries in the test set. Within any 24-hour cycle, you are allowed to submit a prediction file at most 10 times.
format.csv shows an example of a submission file.
Your final submission should match format.csv, with the same number of rows, but with predictions generated by your developed model.
Here are a few guidelines that should help in developing a high-accuracy classifier:
Think about how you encode and normalize the data
Explore different classes of classifiers (decision trees, random forest, boosted classifiers, SVM, logistic regression, KNN…)
For each method that you explore, explain how you chose its hyper-parameters
In your report, show the best accuracy and F1-score you obtain for each method you explore.
You can use any library of your choice.
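The preprocessing and model-exploration guidelines above can be sketched as follows. This is a minimal illustration, not a prescribed solution: it assumes scikit-learn and pandas are available, and the small DataFrame here is a synthetic stand-in for the real Adult training data (column names like `workclass` and `income_gt_50k` are placeholders; substitute your loaded data and column names).

```python
# Sketch: encode/normalize the features, then compare classifiers with
# cross-validated hyper-parameter search. Synthetic stand-in data below --
# replace with the actual Adult training set.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Tiny synthetic stand-in: two numeric features, one categorical, one label.
df = pd.DataFrame({
    "age": [25, 38, 52, 41, 29, 60, 33, 47] * 10,
    "hours_per_week": [40, 50, 60, 38, 45, 20, 55, 40] * 10,
    "workclass": ["Private", "Self-emp", "Private", "Gov",
                  "Private", "Gov", "Self-emp", "Private"] * 10,
    "income_gt_50k": [0, 1, 1, 0, 0, 0, 1, 1] * 10,
})
X, y = df.drop(columns="income_gt_50k"), df["income_gt_50k"]

# Encoding and normalization: one-hot for categoricals, standardize numerics.
pre = ColumnTransformer([
    ("num", StandardScaler(), ["age", "hours_per_week"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["workclass"]),
])

# Hyper-parameters chosen by grid search, so the choice can be reported
# per method as the guidelines ask.
models = {
    "logreg": (LogisticRegression(max_iter=1000), {"clf__C": [0.1, 1, 10]}),
    "rf": (RandomForestClassifier(random_state=0),
           {"clf__max_depth": [3, 5, None]}),
}
best = {}
for name, (clf, grid) in models.items():
    pipe = Pipeline([("pre", pre), ("clf", clf)])
    search = GridSearchCV(pipe, grid, cv=3, scoring="accuracy")
    search.fit(X, y)
    best[name] = search.best_score_
```

The same loop extends naturally to the other classifier families mentioned above (boosted trees, SVM, KNN) by adding entries to `models`.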
Part 2: Fairness Diagnosis
For each of the classifiers presented in part 1, report their demographic disparity, equality of odds, and equality of opportunity for two sensitive attributes: gender and race. Conclude on the fairness properties of the various classifiers you explored in part 1 and the likely sources of bias.
For this part, you need to write your own functions to compute the three fairness metrics.
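One common way to formalize these three metrics is as gaps between per-group rates: demographic disparity as the difference in positive-prediction rates, equality of opportunity as the difference in true-positive rates, and equality of odds as requiring both TPR and FPR to match across groups. A sketch under those definitions (the exact definitions your course uses may differ, so check them against the lecture material; the binary `group` encoding is an assumption):

```python
# Hand-written fairness metrics for a binary classifier and a binary
# sensitive attribute (group encoded as 0/1). These use the common
# rate-gap formulations; verify against your course's definitions.
import numpy as np

def demographic_disparity(y_pred, group):
    """Gap in positive-prediction rates between the two groups."""
    p, g = np.asarray(y_pred), np.asarray(group)
    return abs(p[g == 0].mean() - p[g == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Gap in true-positive rates (recall on y=1) between the two groups."""
    t, p, g = np.asarray(y_true), np.asarray(y_pred), np.asarray(group)
    tpr = lambda grp: p[(g == grp) & (t == 1)].mean()
    return abs(tpr(0) - tpr(1))

def equalized_odds_gap(y_true, y_pred, group):
    """Worst gap across TPR and FPR between the two groups."""
    t, p, g = np.asarray(y_true), np.asarray(y_pred), np.asarray(group)
    def rate(grp, label):  # mean prediction among group members with y=label
        return p[(g == grp) & (t == label)].mean()
    tpr_gap = abs(rate(0, 1) - rate(1, 1))
    fpr_gap = abs(rate(0, 0) - rate(1, 0))
    return max(tpr_gap, fpr_gap)
```

For race, which has more than two categories in the Adult data, you would either binarize it or extend these functions to report the maximum pairwise gap across groups.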
Part 3: Fairness Mitigation
For your best classifier from part 1 (highest accuracy), try the following mitigation strategies:
Remove the sensitive attribute from the features
Identify the attributes that correlate most with the sensitive attribute and remove them from the features, along with the sensitive attribute itself.
For both strategies, report the new accuracy, F1-score, demographic disparity, equality of odds, and equality of opportunity. Comment on whether the mitigation strategies reduce unfair outcomes (why or why not?), and on whether you observe an accuracy-fairness trade-off.
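The two mitigation strategies amount to dropping columns before retraining. A minimal sketch, assuming a pandas DataFrame with the sensitive attribute already encoded numerically (e.g. `sex` as 0/1); the column names are placeholders, and plain Pearson correlation is used here as a simple proxy for association — for categorical features you would need a different measure, such as Cramér's V:

```python
# Sketch of the two column-dropping mitigation strategies.
# Assumes the sensitive attribute is numerically encoded (0/1).
import pandas as pd

def drop_sensitive(X, sensitive):
    """Strategy 1: remove the sensitive attribute itself."""
    return X.drop(columns=[sensitive])

def drop_sensitive_and_correlated(X, sensitive, k=1):
    """Strategy 2: remove the sensitive attribute plus the k features
    most correlated with it (Pearson |r| on numeric columns as a proxy)."""
    corr = X.corr(numeric_only=True)[sensitive].drop(sensitive).abs()
    to_drop = [sensitive] + corr.nlargest(k).index.tolist()
    return X.drop(columns=to_drop)

# Toy example: 'hours' moves in lockstep with 'sex', 'age' barely does,
# so strategy 2 with k=1 drops both 'sex' and 'hours'.
toy = pd.DataFrame({
    "sex":   [0, 0, 1, 1, 0, 1],
    "hours": [10, 10, 20, 20, 10, 20],
    "age":   [50, 20, 30, 40, 25, 45],
})
reduced = drop_sensitive_and_correlated(toy, "sex", k=1)
```

After each strategy, refit the part 1 pipeline on the reduced feature set and recompute the part 2 metrics; note that correlated proxies can let a model reconstruct the sensitive attribute even after it is removed, which is the effect you are asked to discuss.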
Rules
This is an individual assignment. Discussion of broad-level strategies is allowed, but any copying of prediction files or source code will result in an honor code violation.
Feel free to use the programming language of your choice for this assignment.
While you can use libraries and templates for dealing with input data and performing the classification in part 1, you need to write your own fairness metrics in part 2.