Model Explainability in AWS SageMaker

Amazon SageMaker Clarify provides tools to help explain how machine learning (ML) models make predictions. These tools can help ML modelers, developers, and other internal stakeholders understand model characteristics as a whole before deployment and debug the predictions a model provides after it is deployed.

Transparency about how ML models arrive at their predictions is also critical to consumers and regulators, who need to trust the model predictions if they are to accept the decisions based on them.

SageMaker Clarify uses a model-agnostic feature attribution approach, which you can use to understand why a model made a prediction after training and to provide per-instance explanations during inference.

SageMaker Clarify includes a scalable and efficient implementation of SHAP (SHapley Additive exPlanations), based on the concept of a Shapley value from the field of cooperative game theory, which assigns each feature an importance value for a particular prediction.
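To make this concrete, the following is a minimal local sketch that uses the open-source shap package rather than Clarify itself; the toy dataset, model, and background sample are all illustrative assumptions, not part of the Clarify API.

import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy data and model, purely for illustration.
X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# KernelExplainer is model-agnostic: it needs only a prediction function
# and a background (baseline) dataset.
explainer = shap.KernelExplainer(model.predict, X[:50])

# Per-instance explanation: one attribution per feature for a single prediction.
local_attributions = explainer.shap_values(X[:1])

# Global explanation: mean of absolute SHAP values over many instances,
# analogous to Clarify's 'mean_abs' aggregation method shown later.
global_attributions = np.abs(explainer.shap_values(X[:20])).mean(axis=0)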

What is the function of an explanation in the machine learning context? An explanation can be thought of as the answer to a why question, one that helps humans understand the cause of a prediction. For an ML model, you might be interested in answering questions such as:

  • “Why did the model predict a negative outcome such as a loan rejection for a given applicant?”

  • “How does the model make predictions?”

  • “Why did the model make an incorrect prediction?”

  • "Which features have the largest influence on the behavior of the model?”

Feature Attributions that Use Shapley Values

SageMaker Clarify provides feature attributions based on the concept of Shapley value. You can use Shapley values to determine the contribution that each feature made to model predictions. These attributions can be provided for specific predictions and at a global level for the model as a whole.

For example, if you used an ML model for college admissions, the explanations could help determine whether the GPA or the SAT score was the feature most responsible for the model’s predictions, and you could then determine how responsible each feature was for the admission decision about a particular student.

SageMaker Clarify has taken the concept of Shapley values from game theory and deployed it in a machine learning context. The Shapley value provides a way to quantify the contribution of each player to a game, and hence the means to distribute the total gain generated by a game to its players based on their contributions. In this machine learning context, SageMaker Clarify treats the prediction of the model on a given instance as the game and the features included in the model as the players.

For a first approximation, you might be tempted to determine the marginal contribution or effect of each feature by quantifying the result of either dropping that feature from the model or dropping all other features from the model. However, this approach does not take into account that features included in a model are often not independent from each other. For example, if two features are highly correlated, dropping either one of the features might not alter the model prediction significantly.
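The Shapley value resolves this by averaging a feature's marginal contribution over every possible coalition of the other features. The following is a minimal sketch of that exact computation for a hypothetical three-feature admissions "game"; the value function v and its payoff numbers are invented purely for illustration.

from itertools import combinations
from math import factorial

# Players are the model's features; v(S) is a hypothetical value function
# giving the payoff (e.g., predicted admission score) when only the
# features in coalition S are present. All numbers are illustrative.
players = ['GPA', 'SAT', 'Essay']
v = {
    frozenset(): 0.0,
    frozenset({'GPA'}): 0.6,
    frozenset({'SAT'}): 0.5,
    frozenset({'Essay'}): 0.1,
    frozenset({'GPA', 'SAT'}): 0.8,
    frozenset({'GPA', 'Essay'}): 0.7,
    frozenset({'SAT', 'Essay'}): 0.6,
    frozenset({'GPA', 'SAT', 'Essay'}): 0.9,
}

def shapley(player):
    # phi_i = sum over coalitions S not containing i of
    #   |S|! * (n - |S| - 1)! / n! * (v(S union {i}) - v(S))
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for size in range(len(others) + 1):
        for coalition in combinations(others, size):
            S = frozenset(coalition)
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (v[S | {player}] - v[S])
    return total

for p in players:
    print(p, round(shapley(p), 3))  # contributions sum to v(all) - v(empty)

Because every coalition is considered, correlated features share credit rather than one feature absorbing it all. Enumerating all 2^n coalitions is exponential in the number of features, which is why Clarify relies on a scalable SHAP approximation rather than this exact computation.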

Create Feature Attribute Baselines and Explainability Reports

Use SHAPConfig to configure the SHAP analysis. In this example, the baseline is the first row of the test features, and mean_abs, the mean of the absolute SHAP values across all instances, is the aggregation method used to produce global explanations. You use DataConfig to configure the target variable, the data input and output paths, and their formats.
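The code below also assumes that a Clarify processor and a model configuration were created earlier. A minimal sketch of that setup follows; role, model_name, and the instance settings are illustrative placeholders, not values from this example.

from sagemaker import Session, clarify

session = Session()

# Processor that runs the Clarify analysis job.
clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role,                      # placeholder: your SageMaker execution role
    instance_count=1,
    instance_type='ml.m5.xlarge',
    sagemaker_session=session)

# Deployed model that Clarify queries for predictions while computing SHAP values.
model_config = clarify.ModelConfig(
    model_name=model_name,          # placeholder: name of your SageMaker model
    instance_type='ml.m5.xlarge',
    instance_count=1,
    accept_type='text/csv',
    content_type='text/csv')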


# SHAP baseline: here, the first row of the test features.
shap_config = clarify.SHAPConfig(
    baseline=[test_features.iloc[0].values.tolist()],
    num_samples=15,
    agg_method='mean_abs')  # global explanations as mean of absolute SHAP values

# S3 location where Clarify writes the explainability results.
explainability_output_path = 's3://{}/{}/clarify-explainability'.format(bucket, prefix)

# Input data, target column, headers, and format for the analysis job.
explainability_data_config = clarify.DataConfig(
    s3_data_input_path=train_uri,
    s3_output_path=explainability_output_path,
    label='Target',
    headers=training_data.columns.to_list(),
    dataset_type='text/csv')


# Run the Clarify explainability job.
clarify_processor.run_explainability(
    data_config=explainability_data_config,
    model_config=model_config,
    explainability_config=shap_config)
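When the processing job completes, Clarify writes the analysis results, including the explainability report, to the S3 location specified by s3_output_path (explainability_output_path in this example); the report can also be viewed in SageMaker Studio.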
