Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
GitHub / hbaniecki/adversarial-explainable-ai issues and pull requests
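The listing below can also be retrieved programmatically. A minimal sketch in Python, assuming the endpoint path and JSON field names ("number", "title", "state", "labels") follow the service's REST conventions; the exact layout is an assumption here, so consult the API docs at https://issues.ecosyste.ms/docs before relying on it:

# Minimal sketch: fetch issue metadata for this repository from the
# ecosyste.ms Issues API. Endpoint path and response field names are
# assumptions based on the service's conventions, not verified.
import requests

HOST = "GitHub"
REPO = "hbaniecki/adversarial-explainable-ai"
url = (
    "https://issues.ecosyste.ms/api/v1/hosts/"
    f"{HOST}/repositories/{requests.utils.quote(REPO, safe='')}/issues"
)

resp = requests.get(url, timeout=30)
resp.raise_for_status()

# Print one line per issue, mirroring the listing below.
for issue in resp.json():
    print(f"#{issue['number']} - {issue['title']} "
          f"[{issue['state']}] labels={issue.get('labels', [])}")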
#65 - Counterfactual Explanations with Probabilistic Guarantees on their Robustness to Model Change
Issue -
State: open - Opened by hbaniecki 2 months ago
Labels: awaiting
#64 - Fooling SHAP with Output Shuffling Attacks
Issue -
State: open - Opened by hbaniecki 3 months ago
Labels: preprint
#63 - From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation
Issue -
State: closed - Opened by hbaniecki 4 months ago
Labels: awaiting
#62 - Explainable Graph Neural Networks Under Fire
Issue -
State: open - Opened by noppelmax 7 months ago
Labels: preprint
#61 - On the Robustness of Global Feature Effect Explanations
Issue -
State: open - Opened by noppelmax 8 months ago
Labels: awaiting
#60 - Unfooling SHAP and SAGE: Knockoff Imputation for Shapley Values
Issue -
State: closed - Opened by hbaniecki over 1 year ago
Labels: awaiting
#59 - SoK: Explainable Machine Learning in Adversarial Environments
Issue -
State: closed - Opened by hbaniecki over 1 year ago
Labels: awaiting
#58 - Interpretation Attacks and Defenses on Predictive Models Using Electronic Health Records
Issue -
State: closed - Opened by hbaniecki over 1 year ago
Labels: awaiting
#57 - Foiling Explanations in Deep Neural Networks
Issue -
State: closed - Opened by hbaniecki over 1 year ago
Labels: awaiting
#56 - Focus-Shifting Attack: An Adversarial Attack That Retains Saliency Map Information and Manipulates Model Explanations
Issue -
State: closed - Opened by hbaniecki over 1 year ago
Labels: awaiting
#55 - On the Robustness of Removal-Based Feature Attributions
Issue -
State: closed - Opened by hbaniecki over 1 year ago
Labels: preprint
#54 - On Minimizing the Impact of Dataset Shifts on Actionable Explanations
Issue -
State: closed - Opened by hbaniecki over 1 year ago
Labels: awaiting
#53 - SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability
Issue -
State: closed - Opened by hbaniecki over 1 year ago
Labels: awaiting
#52 - "Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution
Issue -
State: closed - Opened by hbaniecki over 1 year ago
Labels: awaiting
#52 - "Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution
Issue -
State: closed - Opened by hbaniecki over 1 year ago
Labels: awaiting
#51 - Don't trust your eyes: on the (un)reliability of feature visualizations
Issue -
State: closed - Opened by hbaniecki over 1 year ago
Labels: preprint
#50 - How to Manipulate CNNs to Make Them Lie: the GradCAM Case
Issue -
State: closed - Opened by hbaniecki almost 2 years ago
Labels: awaiting
#49 - Exploiting Fairness to Enhance Sensitive Attributes Reconstruction
Issue -
State: closed - Opened by hbaniecki almost 2 years ago
Labels: awaiting
#48 - Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks
Issue -
State: closed - Opened by hbaniecki almost 2 years ago
Labels: awaiting
#47 - Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks
Issue -
State: closed - Opened by hbaniecki almost 2 years ago
Labels: preprint
#46 - Certifiably robust interpretation via Rényi differential privacy
Issue -
State: closed - Opened by hbaniecki about 2 years ago
Labels: awaiting
#45 - Disguising Attacks with Explanation-Aware Backdoors
Issue -
State: closed - Opened by hbaniecki about 2 years ago
Labels: awaiting
#44 - On the robustness of sparse counterfactual explanations to adverse perturbations
Issue -
State: closed - Opened by hbaniecki about 2 years ago
Labels: awaiting
#43 - Preventing deception with explanation methods using focused sampling
Issue -
State: closed - Opened by hbaniecki about 2 years ago
- 1 comment
Labels: awaiting
#42 - Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
Issue -
State: closed - Opened by hbaniecki over 2 years ago
Labels: awaiting
#40 - OpenXAI: Towards a Transparent Evaluation of Model Explanations
Issue -
State: closed - Opened by hbaniecki over 2 years ago
- 1 comment
Labels: awaiting
#39 - Fooling SHAP with Stealthily Biased Sampling
Issue -
State: closed - Opened by hbaniecki over 2 years ago
Labels: preprint
#38 - Unfooling Perturbation-Based Post Hoc Explainers
Issue -
State: closed - Opened by hbaniecki over 2 years ago
Labels: preprint
#37 - Attribution-based Explanations that Provide Recourse Cannot be Robust
Issue -
State: closed - Opened by hbaniecki over 2 years ago
Labels: preprint
#36 - Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations
Issue -
State: closed - Opened by hbaniecki over 2 years ago
Labels: awaiting
#35 - Benchmarking Perturbation-based Saliency Maps for Explaining Atari Agents
Issue -
State: closed - Opened by hbaniecki over 2 years ago
Labels: preprint
#34 - Backdooring Explainable Machine Learning
Issue -
State: closed - Opened by hbaniecki almost 3 years ago
Labels: preprint
#33 - What Do You See?: Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
Issue -
State: closed - Opened by hbaniecki almost 3 years ago
Labels: awaiting
#32 - An Adversarial Approach for Explaining the Predictions of Deep Neural Networks
Issue -
State: closed - Opened by hbaniecki about 3 years ago
Labels: awaiting
#31 - From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Issue -
State: closed - Opened by hbaniecki about 3 years ago
Labels: preprint
#30 - BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations
Issue -
State: closed - Opened by hbaniecki about 3 years ago
#29 - Defense Against Explanation Manipulation
Issue -
State: closed - Opened by hbaniecki about 3 years ago
- 1 comment
Labels: preprint
#28 - CLEVR-XAI: A benchmark dataset for the ground truth evaluation of neural network explanations
Issue -
State: closed - Opened by hbaniecki about 3 years ago
Labels: awaiting
#27 - ICLR 22
Issue -
State: closed - Opened by hbaniecki about 3 years ago
Labels: conference
#26 - On Guaranteed Optimal Robust Explanations for NLP Models
Issue -
State: closed - Opened by hbaniecki about 3 years ago
#25 - Robust and Stable Black Box Explanations
Issue -
State: closed - Opened by hbaniecki about 3 years ago
#24 - Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing
Issue -
State: closed - Opened by hbaniecki about 3 years ago
#23 - A Survey on the Robustness of Feature Importance and Counterfactual Explanations
Issue -
State: closed - Opened by hbaniecki about 3 years ago
#22 - Robust Attribution Regularization
Issue -
State: closed - Opened by hbaniecki about 3 years ago
#21 - Crowdsourcing and Evaluating Concept-driven Explanations of Machine Learning Models
Issue -
State: closed - Opened by hbaniecki about 3 years ago
Labels: awaiting
#20 - NeurIPS 21
Issue -
State: closed - Opened by hbaniecki over 3 years ago
Labels: conference
#19 - Towards robust explanations for deep neural networks
Issue -
State: closed - Opened by hbaniecki over 3 years ago
#18 - Brittle interpretations: The Vulnerability of TCAV and Other Concept-based Explainability Tools to Adversarial Attack
Issue -
State: closed - Opened by hbaniecki over 3 years ago
Labels: preprint
#17 - Manipulating and Measuring Model Interpretability
Issue -
State: closed - Opened by hbaniecki over 3 years ago
#16 - FAccT '21
Issue -
State: closed - Opened by hbaniecki over 3 years ago
#15 - When and How to Fool Explainable Models (and Humans) with Adversarial Examples
Issue -
State: closed - Opened by hbaniecki over 3 years ago
Labels: preprint
#14 - Counterfactual Explanations Can Be Manipulated
Issue -
State: closed - Opened by hbaniecki over 3 years ago
- 1 comment
#13 - ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI
Issue -
State: closed - Opened by hbaniecki over 3 years ago
#12 - Adversarial Attacks and Defenses: An Interpretation Perspective
Issue -
State: closed - Opened by hbaniecki over 3 years ago
#11 - Do Feature Attribution Methods Correctly Attribute Features?
Issue -
State: closed - Opened by hbaniecki over 3 years ago
#10 - ICML 2021
Issue -
State: closed - Opened by hbaniecki almost 4 years ago
#9 - Explainable AI for Inspecting Adversarial Attacks on Deep Neural Networks
Issue -
State: closed - Opened by hbaniecki about 4 years ago
#8 - Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
Issue -
State: closed - Opened by hbaniecki about 4 years ago
#7 - Can We Trust Your Explanations? Sanity Checks for Interpreters in Android Malware Analysis
Issue -
State: closed - Opened by hbaniecki about 4 years ago
#6 - Black Box Attacks on Explainable Artificial Intelligence (XAI) methods in Cyber Security
Issue -
State: closed - Opened by hbaniecki about 4 years ago
#5 - AAAI 2021
Issue -
State: closed - Opened by hbaniecki over 4 years ago
#4 - NeurIPS 2020
Issue -
State: closed - Opened by hbaniecki over 4 years ago
- 2 comments