Ecosyste.ms: Issues

An open API service for providing issue and pull request metadata for open source projects.

GitHub / hbaniecki/adversarial-explainable-ai issues and pull requests

#64 - Fooling SHAP with Output Shuffling Attacks

Issue - State: open - Opened by hbaniecki 3 months ago
Labels: preprint

#63 - From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation

Issue - State: closed - Opened by hbaniecki 4 months ago
Labels: awaiting

#62 - Explainable Graph Neural Networks Under Fire

Issue - State: open - Opened by noppelmax 7 months ago
Labels: preprint

#61 - On the Robustness of Global Feature Effect Explanations

Issue - State: open - Opened by noppelmax 8 months ago
Labels: awaiting

#60 - Unfooling SHAP and SAGE: Knockoff Imputation for Shapley Values

Issue - State: closed - Opened by hbaniecki over 1 year ago
Labels: awaiting

#59 - SoK: Explainable Machine Learning in Adversarial Environments

Issue - State: closed - Opened by hbaniecki over 1 year ago
Labels: awaiting

#57 - Foiling Explanations in Deep Neural Networks

Issue - State: closed - Opened by hbaniecki over 1 year ago
Labels: awaiting

#55 - On the Robustness of Removal-Based Feature Attributions

Issue - State: closed - Opened by hbaniecki over 1 year ago
Labels: preprint

#54 - On Minimizing the Impact of Dataset Shifts on Actionable Explanations

Issue - State: closed - Opened by hbaniecki over 1 year ago
Labels: awaiting

#53 - SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability

Issue - State: closed - Opened by hbaniecki over 1 year ago
Labels: awaiting

#51 - Don't trust your eyes: on the (un)reliability of feature visualizations

Issue - State: closed - Opened by hbaniecki over 1 year ago
Labels: preprint

#50 - How to Manipulate CNNs to Make Them Lie: the GradCAM Case

Issue - State: closed - Opened by hbaniecki almost 2 years ago
Labels: awaiting

#49 - Exploiting Fairness to Enhance Sensitive Attributes Reconstruction

Issue - State: closed - Opened by hbaniecki almost 2 years ago
Labels: awaiting

#48 - Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks

Issue - State: closed - Opened by hbaniecki almost 2 years ago
Labels: awaiting

#47 - Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks

Issue - State: closed - Opened by hbaniecki almost 2 years ago
Labels: preprint

#46 - Certifiably robust interpretation via Rényi differential privacy

Issue - State: closed - Opened by hbaniecki about 2 years ago
Labels: awaiting

#45 - Disguising Attacks with Explanation-Aware Backdoors

Issue - State: closed - Opened by hbaniecki about 2 years ago
Labels: awaiting

#44 - On the robustness of sparse counterfactual explanations to adverse perturbations

Issue - State: closed - Opened by hbaniecki about 2 years ago
Labels: awaiting

#43 - Preventing deception with explanation methods using focused sampling

Issue - State: closed - Opened by hbaniecki about 2 years ago - 1 comment
Labels: awaiting

#40 - OpenXAI: Towards a Transparent Evaluation of Model Explanations

Issue - State: closed - Opened by hbaniecki over 2 years ago - 1 comment
Labels: awaiting

#39 - Fooling SHAP with Stealthily Biased Sampling

Issue - State: closed - Opened by hbaniecki over 2 years ago
Labels: preprint

#38 - Unfooling Perturbation-Based Post Hoc Explainers

Issue - State: closed - Opened by hbaniecki over 2 years ago
Labels: preprint

#37 - Attribution-based Explanations that Provide Recourse Cannot be Robust

Issue - State: closed - Opened by hbaniecki over 2 years ago
Labels: preprint

#35 - Benchmarking Perturbation-based Saliency Maps for Explaining Atari Agents

Issue - State: closed - Opened by hbaniecki over 2 years ago
Labels: preprint

#34 - Backdooring Explainable Machine Learning

Issue - State: closed - Opened by hbaniecki almost 3 years ago
Labels: preprint

#32 - An Adversarial Approach for Explaining the Predictions of Deep Neural Networks

Issue - State: closed - Opened by hbaniecki about 3 years ago
Labels: awaiting

#29 - Defense Against Explanation Manipulation

Issue - State: closed - Opened by hbaniecki about 3 years ago - 1 comment
Labels: preprint

#27 - ICLR 22

Issue - State: closed - Opened by hbaniecki about 3 years ago
Labels: conference

#26 - On Guaranteed Optimal Robust Explanations for NLP Models

Issue - State: closed - Opened by hbaniecki about 3 years ago

#25 - Robust and Stable Black Box Explanations

Issue - State: closed - Opened by hbaniecki about 3 years ago

#22 - Robust Attribution Regularization

Issue - State: closed - Opened by hbaniecki about 3 years ago

#21 - Crowdsourcing and Evaluating Concept-driven Explanations of Machine Learning Models

Issue - State: closed - Opened by hbaniecki about 3 years ago
Labels: awaiting

#20 - NeurIPS 21

Issue - State: closed - Opened by hbaniecki over 3 years ago
Labels: conference

#19 - Towards robust explanations for deep neural networks

Issue - State: closed - Opened by hbaniecki over 3 years ago

#17 - Manipulating and Measuring Model Interpretability

Issue - State: closed - Opened by hbaniecki over 3 years ago

#16 - FAccT '21

Issue - State: closed - Opened by hbaniecki over 3 years ago

#15 - When and How to Fool Explainable Models (and Humans) with Adversarial Examples

Issue - State: closed - Opened by hbaniecki over 3 years ago
Labels: preprint

#14 - Counterfactual Explanations Can Be Manipulated

Issue - State: closed - Opened by hbaniecki over 3 years ago - 1 comment

#11 - Do Feature Attribution Methods Correctly Attribute Features?

Issue - State: closed - Opened by hbaniecki over 3 years ago

#10 - ICML 2021

Issue - State: closed - Opened by hbaniecki almost 4 years ago

#5 - AAAI 2021

Issue - State: closed - Opened by hbaniecki over 4 years ago

#4 - NeurIPS 2020

Issue - State: closed - Opened by hbaniecki over 4 years ago - 2 comments