<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Model Bias Detection | Patrick Koller</title><link>https://patch0816.github.io/tag/model-bias-detection/</link><atom:link href="https://patch0816.github.io/tag/model-bias-detection/index.xml" rel="self" type="application/rss+xml"/><description>Model Bias Detection</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 01 Sep 2025 00:00:00 +0000</lastBuildDate><image><url>https://patch0816.github.io/media/icon_hu3661d5716fb1fb87a460a392592f4033_20786_512x512_fill_lanczos_center_3.png</url><title>Model Bias Detection</title><link>https://patch0816.github.io/tag/model-bias-detection/</link></image><item><title>Caption-Driven Explainability: Probing CNNs for Bias via CLIP</title><link>https://patch0816.github.io/publication/2023-caption-based-xai/</link><pubDate>Mon, 01 Sep 2025 00:00:00 +0000</pubDate><guid>https://patch0816.github.io/publication/2023-caption-based-xai/</guid><description>
&lt;p>&lt;strong>Patrick Koller&lt;/strong>, &lt;a href="https://avdravid.github.io/" target="_blank" rel="noopener">Amil V. Dravid&lt;/a>, &lt;a href="https://www.barnes-schuster.ch/" target="_blank" rel="noopener">Prof. Dr. Guido Schuster&lt;/a>, and &lt;a href="https://www.mccormick.northwestern.edu/research-faculty/directory/profiles/katsaggelos-aggelos.html" target="_blank" rel="noopener">Prof. Dr. Aggelos Katsaggelos&lt;/a>&lt;/p>
&lt;p>Accepted and presented at the &lt;a href="https://cmsworkshops.com/ICIP2025/view_paper.php?PaperNum=3069#top" target="_blank" rel="noopener">IEEE ICIP 2025 Satellite Workshop&lt;/a>: &amp;ldquo;Generative AI for World Simulations and Communications &amp;amp; Celebrating 40 Years of Excellence in Education: Honoring Prof. Aggelos Katsaggelos,&amp;rdquo; Anchorage, Alaska, USA, Sept 14, 2025.&lt;/p>
&lt;h3 id="bibtex-citation">BibTeX Citation&lt;/h3>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-bibtex" data-lang="bibtex">&lt;span class="line">&lt;span class="cl">&lt;span class="nc">@INPROCEEDINGS&lt;/span>&lt;span class="p">{&lt;/span>&lt;span class="nl">Koller_2025_CaptionXAI&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">author&lt;/span>&lt;span class="p">=&lt;/span>&lt;span class="s">{Koller, Patrick and Dravid, Amil V. and Schuster, Guido M. and Katsaggelos, Aggelos K.}&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">booktitle&lt;/span>&lt;span class="p">=&lt;/span>&lt;span class="s">{2025 IEEE International Conference on Image Processing Workshops (ICIPW)}&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">title&lt;/span>&lt;span class="p">=&lt;/span>&lt;span class="s">{Caption-Driven Explainability: Probing CNNS for Bias Via Clip}&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">year&lt;/span>&lt;span class="p">=&lt;/span>&lt;span class="s">{2025}&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">volume&lt;/span>&lt;span class="p">=&lt;/span>&lt;span class="s">{}&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">number&lt;/span>&lt;span class="p">=&lt;/span>&lt;span class="s">{}&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">pages&lt;/span>&lt;span class="p">=&lt;/span>&lt;span class="s">{663-667}&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">keywords&lt;/span>&lt;span class="p">=&lt;/span>&lt;span class="s">{Explainable AI;Computational modeling;Machine vision;Zero shot learning;Surgery;Medical services;Debugging;Predictive models;Robustness;Convolutional neural networks;Multi-Modal Explainability;CLIP;Model Bias Detection;Zero-Shot Learning;Network Surgery}&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="na">doi&lt;/span>&lt;span class="p">=&lt;/span>&lt;span class="s">{10.1109/ICIPW68931.2025.11386015}&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="p">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div></description></item></channel></rss>