From e565d8b6eaf3320838087e08a573642c7c38248f Mon Sep 17 00:00:00 2001
From: Oumaima Fisaoui <48260689+Oumaimafisaoui@users.noreply.github.com>
Date: Thu, 19 Sep 2024 13:37:16 +0100
Subject: [PATCH] Chore(AI): lower the credit-scoring AUC validation threshold from 75% to 50%

---
 subjects/ai/credit-scoring/README.md       | 2 +-
 subjects/ai/credit-scoring/audit/README.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/subjects/ai/credit-scoring/README.md b/subjects/ai/credit-scoring/README.md
index 5f76611ec..829804c49 100644
--- a/subjects/ai/credit-scoring/README.md
+++ b/subjects/ai/credit-scoring/README.md
@@ -22,7 +22,7 @@ There are 3 expected deliverables associated with the scoring model:
 
 - The trained machine learning model with the features engineering pipeline:
   - Do not forget: **Coming up with features is difficult, time-consuming, requires expert knowledge. ‘Applied machine learning’ is basically feature engineering.**
-  - The model is validated if the **AUC on the test set is higher than 75%**.
+  - The model is validated if the **AUC on the test set is higher than 50%**.
 - The labelled test data is not publicly available. However, a Kaggle competition uses the same data. The procedure to evaluate test set submission is the same as the one used for the project 1.
 
 #### b - Kaggle submission

diff --git a/subjects/ai/credit-scoring/audit/README.md b/subjects/ai/credit-scoring/audit/README.md
index 0c363cdd8..694c53ae7 100644
--- a/subjects/ai/credit-scoring/audit/README.md
+++ b/subjects/ai/credit-scoring/audit/README.md
@@ -59,7 +59,7 @@ project
 
 ```prompt
 python predict.py
 
-AUC on test set: 0.76
+AUC on test set: 0.50
 ```
 
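For reference, the audited `predict.py` is only described by its printed output above. A minimal sketch of how such a script might compute the test-set AUC, assuming scikit-learn, a trained pipeline saved as `model.joblib`, and a labelled `test_set.csv` with a `TARGET` column (all hypothetical names, not taken from the patch):

```python
# Illustrative sketch only: file names and the TARGET column are assumptions,
# not part of the patched project.
import joblib
import pandas as pd
from sklearn.metrics import roc_auc_score

# Load the trained feature-engineering + model pipeline and the labelled test data.
pipeline = joblib.load("model.joblib")
test_df = pd.read_csv("test_set.csv")

X_test = test_df.drop(columns=["TARGET"])
y_test = test_df["TARGET"]

# ROC AUC is computed from the positive-class probability, not from hard labels.
y_proba = pipeline.predict_proba(X_test)[:, 1]

print(f"AUC on test set: {roc_auc_score(y_test, y_proba):.2f}")
```

Using `predict_proba` rather than `predict` matters here: ROC AUC ranks examples by score, so a 0.50 AUC corresponds to random ranking, which is why it is the lowest threshold a model can be held to.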