<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Domain Adaptation &#8211; Jan-Nico Zaech</title>
	<atom:link href="/tag/domain-adaptation/feed/" rel="self" type="application/rss+xml" />
	<link>/</link>
	<description></description>
	<lastBuildDate>Thu, 13 Jul 2023 16:50:56 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.5.4</generator>
	<item>
		<title>Unsupervised robust domain adaptation without source data</title>
		<link>/unsupervised-robust-domain-adaptation-without-source-data/</link>
		
		<dc:creator><![CDATA[zaech]]></dc:creator>
		<pubDate>Tue, 04 Jan 2022 20:39:34 +0000</pubDate>
				<category><![CDATA[Publications]]></category>
		<category><![CDATA[Domain Adaptation]]></category>
		<guid isPermaLink="false">/?p=123</guid>

					<description><![CDATA[A method that keeps a network robust against adversarial images during source-free domain adaptation.]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-large"><img fetchpriority="high" decoding="async" width="1024" height="576" src="/wp-content/uploads/2023/06/teaser-4-1024x576.jpg" alt="" class="wp-image-124" srcset="/wp-content/uploads/2023/06/teaser-4-1024x576.jpg 1024w, /wp-content/uploads/2023/06/teaser-4-300x169.jpg 300w, /wp-content/uploads/2023/06/teaser-4-768x432.jpg 768w, /wp-content/uploads/2023/06/teaser-4-1536x864.jpg 1536w, /wp-content/uploads/2023/06/teaser-4.jpg 1920w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Peshal Agarwal, Danda Pani Paudel, Jan-Nico Zaech, Luc Van Gool</p>



<p><em>Winter Conference on Applications of Computer Vision, WACV 2022</em></p>



<h3 class="wp-block-heading">Abstract</h3>



<p>We study the problem of robust domain adaptation when neither target labels nor source data are available. The considered robustness is against adversarial perturbations. This paper aims to answer the question of how to make the target model both robust and accurate in the setting of source-free unsupervised domain adaptation. The major findings of this paper are: (i) robust source models can be transferred robustly to the target; (ii) robust domain adaptation can greatly benefit from non-robust pseudo-labels and the pair-wise contrastive loss. The proposed method of using non-robust pseudo-labels performs surprisingly well on both clean and adversarial samples for the task of image classification. We show a consistent accuracy improvement of over 10% against the tested baselines on four benchmark datasets.</p>
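
<p>The core recipe of supervising a robust target model with confident predictions from a non-robust model can be sketched as follows. This is a minimal, hypothetical illustration in plain Python; the confidence threshold and function names are placeholders, not the paper's implementation:</p>

```python
def pseudo_labels(probs, threshold=0.9):
    """Keep only confident predictions of the non-robust model.

    probs: per-sample class-probability lists produced by the standard
    (non-robust) model on clean target images.
    Returns a class index per sample, or None when the prediction is
    too uncertain to serve as a pseudo-label.
    """
    labels = []
    for p in probs:
        confidence = max(p)
        labels.append(p.index(confidence) if confidence >= threshold else None)
    return labels

# Samples that pass the threshold would then supervise the robust model
# on both clean and adversarially perturbed views of the same image.
print(pseudo_labels([[0.95, 0.05], [0.55, 0.45]]))  # [0, None]
```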
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Texture Underfitting for Domain Adaptation</title>
		<link>/texture-underfitting-for-domain-adaptation/</link>
		
		<dc:creator><![CDATA[zaech]]></dc:creator>
		<pubDate>Mon, 28 Oct 2019 19:05:45 +0000</pubDate>
				<category><![CDATA[Publications]]></category>
		<category><![CDATA[Domain Adaptation]]></category>
		<guid isPermaLink="false">/?p=150</guid>

					<description><![CDATA[A method to use image structure in domain adaptation implemented as a two stage training process.]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-large"><img decoding="async" width="1024" height="576" src="/wp-content/uploads/2023/06/teaser-8-1024x576.jpg" alt="" class="wp-image-151" srcset="/wp-content/uploads/2023/06/teaser-8-1024x576.jpg 1024w, /wp-content/uploads/2023/06/teaser-8-300x169.jpg 300w, /wp-content/uploads/2023/06/teaser-8-768x432.jpg 768w, /wp-content/uploads/2023/06/teaser-8-1536x864.jpg 1536w, /wp-content/uploads/2023/06/teaser-8.jpg 1920w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Jan-Nico Zaech, Dengxin Dai, Martin Hahner, Luc Van Gool</p>



<p><em>Intelligent Transportation Systems Conference (IEEE), ITSC 2019</em></p>



<h3 class="wp-block-heading">Abstract</h3>



<p>Comprehensive semantic segmentation is one of the key components for robust scene understanding and a requirement to enable autonomous driving. Driven by large-scale datasets, convolutional neural networks show impressive results on this task. However, a segmentation algorithm generalizing to various scenes and conditions would require an enormously diverse dataset, making the labour-intensive data acquisition and labeling process prohibitively expensive. Under the assumption of structural similarities between segmentation maps, domain adaptation promises to resolve this challenge by transferring knowledge from existing, potentially simulated datasets to new environments where no supervision exists. While the performance of this approach is contingent on the idea that neural networks learn a high-level understanding of scene structure, recent work suggests that neural networks are biased towards overfitting to texture instead of learning structural and shape information. Considering the ideas underlying semantic segmentation, we employ random image stylization to augment the training dataset and propose a training procedure that facilitates texture underfitting to improve the performance of domain adaptation. In experiments with supervised as well as unsupervised methods for the task of synthetic-to-real domain adaptation, we show that our approach outperforms conventional training methods.</p>
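
<p>The stylization-based augmentation described above can be sketched as follows. This is a hypothetical minimal example; the <code>stylize</code> callback and the probability <code>p</code> are assumed placeholders, not the paper's actual pipeline:</p>

```python
import random

def stylization_augment(images, stylize, p=0.5, rng=None):
    """With probability p, replace each training image by a randomly
    stylized copy, so the network cannot rely on texture cues and is
    pushed towards shape and structure instead."""
    rng = rng or random.Random()
    return [stylize(img) if rng.random() < p else img for img in images]

# Deterministic demo with toy "images" and a toy style transform:
# p=1.0 stylizes every sample.
print(stylization_augment([1, 2, 3], stylize=lambda x: -x, p=1.0))  # [-1, -2, -3]
```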
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
