<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Jan-Nico Zaech</title>
	<atom:link href="/feed/" rel="self" type="application/rss+xml" />
	<link>/</link>
	<description></description>
	<lastBuildDate>Tue, 11 Jun 2024 09:47:16 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.5.4</generator>
	<item>
		<title>Quantum Computer Vision and Machine Learning @ ECCV 2024</title>
		<link>/qcvml-eccv-2024/</link>
		
		<dc:creator><![CDATA[zaech]]></dc:creator>
		<pubDate>Tue, 11 Jun 2024 08:35:11 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">/?p=284</guid>

					<description><![CDATA[The second QCVML workshop has been accepted at ECCV 2024 after a successful session at CVPR 2023.]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="1024" height="536" src="/wp-content/uploads/2024/06/qcvml2024-1024x536.jpg" alt="" class="wp-image-285" srcset="/wp-content/uploads/2024/06/qcvml2024-1024x536.jpg 1024w, /wp-content/uploads/2024/06/qcvml2024-300x157.jpg 300w, /wp-content/uploads/2024/06/qcvml2024-768x402.jpg 768w, /wp-content/uploads/2024/06/qcvml2024-1536x805.jpg 1536w, /wp-content/uploads/2024/06/qcvml2024.jpg 1655w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>The second QCVML workshop has been accepted at ECCV 2024 after a successful session at CVPR 2023.</p>



<p>I am happy to be part of the organizing team. Our goal is to promote the exciting field of quantum computing in computer vision, as well as to provide a platform for researchers interested in this area to connect and exchange ideas. Get ready for half a day of tutorials, invited talks, and a poster session highlighting early work in the field.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Probabilistic Sampling of Balanced K-Means using Adiabatic Quantum Computing</title>
		<link>/probabilistic-sampling-of-balanced-k-means-using-adiabatic-quantum-computing/</link>
		
		<dc:creator><![CDATA[zaech]]></dc:creator>
		<pubDate>Sat, 01 Jun 2024 00:00:06 +0000</pubDate>
				<category><![CDATA[Publications]]></category>
		<category><![CDATA[Computer Vision]]></category>
		<category><![CDATA[Quantum Computing]]></category>
		<guid isPermaLink="false">/?p=256</guid>

					<description><![CDATA[Jan-Nico Zaech, Martin Danelljan, Tolga Birdal, Luc Van Gool IEEE Conference on Computer Vision and Pattern Recognition 2024 (CVPR) Abstract Adiabatic quantum computing (AQC) is a promising approach for discrete and often NP-hard optimization problems. Current AQCs allow to implement problems of research interest, which has sparked the development of quantum representations for many computer [&#8230;]]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-full is-resized"><img decoding="async" width="1071" height="315" src="/wp-content/uploads/2023/11/teaser.png" alt="" class="wp-image-258" style="width:1242px;height:auto" srcset="/wp-content/uploads/2023/11/teaser.png 1071w, /wp-content/uploads/2023/11/teaser-300x88.png 300w, /wp-content/uploads/2023/11/teaser-1024x301.png 1024w, /wp-content/uploads/2023/11/teaser-768x226.png 768w" sizes="(max-width: 1071px) 100vw, 1071px" /></figure>



<p>Jan-Nico Zaech, Martin Danelljan, Tolga Birdal, Luc Van Gool</p>



<p>IEEE Conference on Computer Vision and Pattern Recognition 2024 (CVPR)</p>



<h3 class="wp-block-heading">Abstract</h3>



<p>Adiabatic quantum computing (AQC) is a promising approach for discrete and often NP-hard optimization problems. Current AQCs make it possible to implement problems of research interest, which has sparked the development of quantum representations for many computer vision tasks. Despite requiring multiple measurements from the noisy AQC, current approaches utilize only the best measurement, discarding the information contained in the remaining ones. In this work, we explore the potential of using this information for probabilistic balanced k-means clustering. Instead of discarding non-optimal solutions, we propose to use them to compute calibrated posterior probabilities at little additional computational cost. This allows us to identify ambiguous solutions and data points, which we demonstrate on a D-Wave AQC on synthetic tasks and real visual data.</p>
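As an illustration of the sampling idea above, here is a minimal sketch of turning repeated annealer measurements into a posterior over candidate solutions via Boltzmann weighting. The energies, counts, and temperature are hypothetical stand-ins, not the calibration procedure from the paper:

```python
import numpy as np

def posterior_from_samples(energies, counts, beta=1.0):
    """Turn repeated (noisy) annealer measurements into posterior
    probabilities over the distinct solutions that were observed.

    Instead of keeping only the lowest-energy sample, weight every
    distinct solution by a Boltzmann factor exp(-beta * E) times its
    observed frequency, then normalize.
    """
    energies = np.asarray(energies, dtype=float)
    counts = np.asarray(counts, dtype=float)
    logw = -beta * energies + np.log(counts)
    logw -= logw.max()          # subtract max for numerical stability
    w = np.exp(logw)
    return w / w.sum()

# Hypothetical: three distinct solutions observed across 100 reads.
p = posterior_from_samples(energies=[-5.0, -4.5, -3.0], counts=[60, 30, 10])
```

A near-uniform posterior over several solutions would flag the clustering as ambiguous, while a sharply peaked one indicates a confident assignment.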
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>PhD Defense</title>
		<link>/phd-defense/</link>
		
		<dc:creator><![CDATA[zaech]]></dc:creator>
		<pubDate>Thu, 14 Dec 2023 20:00:46 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">/?p=270</guid>

					<description><![CDATA[I successfully defended my PhD on Vision for Autonomous Systems: From Tracking and Prediction to Quantum Computing on December 14th.]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-columns is-not-stacked-on-mobile is-layout-flex wp-container-core-columns-is-layout-1 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:853px">
<figure class="wp-block-image size-large is-resized"><img decoding="async" width="1024" height="576" src="/wp-content/uploads/2024/02/defense_title-1024x576.png" alt="" class="wp-image-275" style="width:853px;height:auto" srcset="/wp-content/uploads/2024/02/defense_title-1024x576.png 1024w, /wp-content/uploads/2024/02/defense_title-300x169.png 300w, /wp-content/uploads/2024/02/defense_title-768x432.png 768w, /wp-content/uploads/2024/02/defense_title-1536x863.png 1536w, /wp-content/uploads/2024/02/defense_title.png 1918w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>



<div class="wp-block-column is-layout-constrained wp-block-column-is-layout-constrained">
<figure class="wp-block-image alignright size-large is-resized"><img loading="lazy" decoding="async" width="769" height="1024" src="/wp-content/uploads/2024/02/PXL_20231214_1907196513-769x1024.jpg" alt="" class="wp-image-274" style="width:auto;height:480px" srcset="/wp-content/uploads/2024/02/PXL_20231214_1907196513-769x1024.jpg 769w, /wp-content/uploads/2024/02/PXL_20231214_1907196513-225x300.jpg 225w, /wp-content/uploads/2024/02/PXL_20231214_1907196513-768x1023.jpg 768w, /wp-content/uploads/2024/02/PXL_20231214_1907196513.jpg 1000w" sizes="(max-width: 769px) 100vw, 769px" /></figure>
</div>
</div>



<p>I successfully defended my PhD on <em>Vision for Autonomous Systems: From Tracking and Prediction to Quantum Computing</em> on December 14th. </p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Invited Talk @ INSAIT</title>
		<link>/invited-talk-insait/</link>
		
		<dc:creator><![CDATA[zaech]]></dc:creator>
		<pubDate>Tue, 28 Nov 2023 13:35:56 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">/?p=263</guid>

					<description><![CDATA[I have been invited to give a talk on "Vision for Autonomous Systems: from Tracking and Prediction to Quantum Computing" at INSAIT.]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="579" height="272" src="/wp-content/uploads/2023/11/teaser-1.png" alt="" class="wp-image-264" srcset="/wp-content/uploads/2023/11/teaser-1.png 579w, /wp-content/uploads/2023/11/teaser-1-300x141.png 300w" sizes="(max-width: 579px) 100vw, 579px" /></figure>



<p>I have been invited to give a talk on &#8220;Vision for Autonomous Systems: from Tracking and Prediction to Quantum Computing&#8221; at INSAIT – Institute for Computer Science, Artificial Intelligence and Technology.</p>



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Paper accepted @ WACV 2024</title>
		<link>/paper-accepted-wacv-2024/</link>
		
		<dc:creator><![CDATA[zaech]]></dc:creator>
		<pubDate>Tue, 24 Oct 2023 13:41:08 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<guid isPermaLink="false">/?p=266</guid>

					<description><![CDATA[Our joint student project on "Optimizing Long-Term Robot Tracking with Multi-Platform Sensor Fusion" has been accepted for publication at WACV 2024.]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="1024" height="243" src="/wp-content/uploads/2023/11/teaser-2.png" alt="" class="wp-image-267" style="width:632px;height:auto" srcset="/wp-content/uploads/2023/11/teaser-2.png 1024w, /wp-content/uploads/2023/11/teaser-2-300x71.png 300w, /wp-content/uploads/2023/11/teaser-2-768x182.png 768w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Our joint student project on &#8220;Optimizing Long-Term Robot Tracking with Multi-Platform Sensor Fusion&#8221; has been accepted for publication at WACV 2024 and marks the first ETH RoboCup team NomadZ paper accepted at a full conference. </p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Quantum Computer Vision and Machine Learning @ CVPR 2023</title>
		<link>/quantum-computer-vision-and-machine-learning-cvpr-2023/</link>
					<comments>/quantum-computer-vision-and-machine-learning-cvpr-2023/#respond</comments>
		
		<dc:creator><![CDATA[zaech]]></dc:creator>
		<pubDate>Thu, 08 Jun 2023 12:50:48 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[Quantum Computing]]></category>
		<guid isPermaLink="false">/?p=54</guid>

					<description><![CDATA[I am an organizer of the workshop on Quantum Computer Vision and Machine Learning at CVPR 2023 in Vancouver, Canada.]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large is-resized"><img loading="lazy" decoding="async" src="/wp-content/uploads/2023/06/teaser-1024x561.jpg" alt="" class="wp-image-55" width="1238" height="678" srcset="/wp-content/uploads/2023/06/teaser-1024x561.jpg 1024w, /wp-content/uploads/2023/06/teaser-300x164.jpg 300w, /wp-content/uploads/2023/06/teaser-768x421.jpg 768w, /wp-content/uploads/2023/06/teaser.jpg 1202w" sizes="(max-width: 1238px) 100vw, 1238px" /></figure>



<p>I am an organizer of the workshop on Quantum Computer Vision and Machine Learning at CVPR 2023 in Vancouver, Canada.</p>



<p>Our goal is to introduce and promote the exciting field of quantum computing to computer vision, as well as provide a platform for researchers interested in this area to connect and exchange ideas. Get ready for half a day of tutorials, invited talks, and a poster session that highlights early work in the field.</p>
]]></content:encoded>
					
					<wfw:commentRss>/quantum-computer-vision-and-machine-learning-cvpr-2023/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Optimizing Long-Term Player Tracking and Identification in NAO Robot Soccer by fusing Game-state and External Video</title>
		<link>/optimizing-long-term-player-tracking-and-identification-in-nao-robot-soccer-by-fusing-game-state-and-external-video/</link>
		
		<dc:creator><![CDATA[zaech]]></dc:creator>
		<pubDate>Fri, 02 Jun 2023 15:45:51 +0000</pubDate>
				<category><![CDATA[Publications]]></category>
		<category><![CDATA[RoboCup]]></category>
		<category><![CDATA[Tracking]]></category>
		<guid isPermaLink="false">/?p=83</guid>

					<description><![CDATA[A collaborative sensing approach for multi object tracking of robots.]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-2 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:53%">
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="505" src="/wp-content/uploads/2023/06/method-1-1024x505.jpg" alt="" class="wp-image-84" srcset="/wp-content/uploads/2023/06/method-1-1024x505.jpg 1024w, /wp-content/uploads/2023/06/method-1-300x148.jpg 300w, /wp-content/uploads/2023/06/method-1-768x379.jpg 768w, /wp-content/uploads/2023/06/method-1-1536x757.jpg 1536w, /wp-content/uploads/2023/06/method-1.jpg 1588w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>



<div class="wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow">
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="/wp-content/uploads/2023/06/field-edited-1024x576.jpg" alt="" class="wp-image-41" srcset="/wp-content/uploads/2023/06/field-edited-1024x576.jpg 1024w, /wp-content/uploads/2023/06/field-edited-300x169.jpg 300w, /wp-content/uploads/2023/06/field-edited-768x432.jpg 768w, /wp-content/uploads/2023/06/field-edited.jpg 1307w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>
</div>



<p>Giuliano Albanese*, Arka Mitra*, Jan-Nico Zaech*, Yupeng Zhao*, Ajad Chhatkuli, and Luc Van Gool</p>



<p>International Conference on Robotics and Automation Workshops, ICRA 2023 (<a href="https://coperception.github.io/index.html">CoPerception: Collaborative Perception and Learning</a>)</p>



<h3 class="wp-block-heading">Abstract</h3>



<p>Monitoring a fleet of robots requires stable long-term tracking with re-identification, which remains an unsolved challenge in many scenarios. One application is the analysis of autonomous robotic soccer games at RoboCup. Tracking these games requires handling identical-looking players, strong occlusions, and non-professional video recordings, but also offers state information estimated by the robots. To make effective use of the information coming from the robot sensors, we propose a robust tracking and identification pipeline. It fuses external non-calibrated camera data with the robots’ internal states using quadratic optimization for tracklet matching. The approach is validated using game recordings from previous RoboCup World Cups.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>NomadZ @ ICRA 2023</title>
		<link>/nomadz-icra-2023/</link>
		
		<dc:creator><![CDATA[zaech]]></dc:creator>
		<pubDate>Fri, 02 Jun 2023 13:00:00 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[RoboCup]]></category>
		<guid isPermaLink="false">/?p=38</guid>

					<description><![CDATA[The first NomadZ paper has been accepted in the first workshop on Collaborative Perception and Learning to be held at ICRA 2023!]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-3 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow">
<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1307" height="735" src="/wp-content/uploads/2023/06/field-edited.jpg" alt="" class="wp-image-41" srcset="/wp-content/uploads/2023/06/field-edited.jpg 1307w, /wp-content/uploads/2023/06/field-edited-300x169.jpg 300w, /wp-content/uploads/2023/06/field-edited-1024x576.jpg 1024w, /wp-content/uploads/2023/06/field-edited-768x432.jpg 768w" sizes="(max-width: 1307px) 100vw, 1307px" /></figure>
</div>



<div class="wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:53.5%">
<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="505" src="/wp-content/uploads/2023/06/method-1024x505.jpg" alt="" class="wp-image-39" srcset="/wp-content/uploads/2023/06/method-1024x505.jpg 1024w, /wp-content/uploads/2023/06/method-300x148.jpg 300w, /wp-content/uploads/2023/06/method-768x379.jpg 768w, /wp-content/uploads/2023/06/method-1536x757.jpg 1536w, /wp-content/uploads/2023/06/method.jpg 1588w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>
</div>



<p>The NomadZ paper titled “Optimizing Long-Term Player Tracking and Identification in NAO Robot Soccer by fusing Game-state and External Video” has been accepted in the first workshop on <a href="https://coperception.github.io/index.html">Collaborative Perception and Learning</a> to be held at ICRA 2023 in London!</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Adiabatic Quantum Computing for Multi Object Tracking</title>
		<link>/adiabatic-quantum-computing-for-multi-object-tracking/</link>
		
		<dc:creator><![CDATA[zaech]]></dc:creator>
		<pubDate>Sun, 19 Jun 2022 00:00:18 +0000</pubDate>
				<category><![CDATA[Publications]]></category>
		<category><![CDATA[Quantum Computing]]></category>
		<category><![CDATA[Tracking]]></category>
		<guid isPermaLink="false">/?p=74</guid>

					<description><![CDATA[A Multi-Object Tracking algorithm that can be solved with Adiabatic Quantum Computing]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large is-resized"><img loading="lazy" decoding="async" src="/wp-content/uploads/2023/06/teaser-1-1024x576.jpg" alt="" class="wp-image-75" width="1239" height="697" srcset="/wp-content/uploads/2023/06/teaser-1-1024x576.jpg 1024w, /wp-content/uploads/2023/06/teaser-1-300x169.jpg 300w, /wp-content/uploads/2023/06/teaser-1-768x432.jpg 768w, /wp-content/uploads/2023/06/teaser-1-1536x863.jpg 1536w, /wp-content/uploads/2023/06/teaser-1.jpg 1612w" sizes="(max-width: 1239px) 100vw, 1239px" /></figure>



<p>Jan-Nico Zaech, Alexander Liniger, Martin Danelljan, Dengxin Dai, Luc Van Gool</p>



<p><em>Conference on Computer Vision and Pattern Recognition, CVPR 2022</em></p>



<h3 class="wp-block-heading">Abstract</h3>



<p>Multi-Object Tracking (MOT) is most often approached in the tracking-by-detection paradigm, where object detections are associated through time. The association step naturally leads to discrete optimization problems. As these optimization problems are often NP-hard, they can only be solved exactly for small instances on current hardware. Adiabatic quantum computing (AQC) offers a solution for this, as it has the potential to provide a considerable speedup on a range of NP-hard optimization problems in the near future. However, current MOT formulations are unsuitable for quantum computing due to their scaling properties. In this work, we therefore propose the first MOT formulation designed to be solved with AQC. We employ an Ising model that represents the quantum mechanical system implemented on the AQC. We show that our approach is competitive with state-of-the-art optimization-based approaches, even when using off-the-shelf integer programming solvers. Finally, we demonstrate that our MOT problem is already solvable on the current generation of real quantum computers for small examples, and analyze the properties of the measured solutions.</p>
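To make the Ising/QUBO idea concrete, here is a minimal, illustrative sketch. The tiny cost matrix and the brute-force solver are hypothetical stand-ins, not the formulation from the paper: detection-to-track assignments become binary variables, conflicting assignments are penalized, and the lowest-energy state is the association an AQC would ideally sample.

```python
import itertools
import numpy as np

def solve_qubo_brute_force(Q):
    """Exhaustively minimize x^T Q x over binary vectors x.

    Mimics what an adiabatic quantum computer samples from:
    the low-energy states of the equivalent Ising model.
    """
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy association problem: detections d0, d1 vs. tracks t0, t1.
# Variables: x0 = d0->t0, x1 = d0->t1, x2 = d1->t0, x3 = d1->t1.
# Diagonal entries hold (negative) matching scores; off-diagonal
# penalties forbid assigning a detection to two tracks or a track
# to two detections.
Q = np.array([
    [-1.0,  2.0,  2.0,  0.0],
    [ 0.0, -0.5,  0.0,  2.0],
    [ 0.0,  0.0, -0.8,  2.0],
    [ 0.0,  0.0,  0.0, -1.2],
])
x, energy = solve_qubo_brute_force(Q)  # conflict-free assignment d0->t0, d1->t1
```

Brute force is exponential in the number of variables, which is exactly why a formulation with favorable scaling is needed before a quantum annealer becomes useful.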
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Learnable Online Graph Representations for 3D Multi-Object Tracking</title>
		<link>/learnable-online-graph-representations-for-3d-multi-object-tracking/</link>
		
		<dc:creator><![CDATA[zaech]]></dc:creator>
		<pubDate>Mon, 23 May 2022 16:41:28 +0000</pubDate>
				<category><![CDATA[Publications]]></category>
		<category><![CDATA[Autonomous Systems]]></category>
		<category><![CDATA[Tracking]]></category>
		<guid isPermaLink="false">/?p=109</guid>

					<description><![CDATA[An online 3D Multi-Object Tracking method based on graph neural networks.]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="575" src="/wp-content/uploads/2023/06/teaser-3-1024x575.jpg" alt="" class="wp-image-111" srcset="/wp-content/uploads/2023/06/teaser-3-1024x575.jpg 1024w, /wp-content/uploads/2023/06/teaser-3-300x168.jpg 300w, /wp-content/uploads/2023/06/teaser-3-768x431.jpg 768w, /wp-content/uploads/2023/06/teaser-3-1536x862.jpg 1536w, /wp-content/uploads/2023/06/teaser-3.jpg 1916w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Jan-Nico Zaech, Dengxin Dai, Alexander Liniger, Martin Danelljan, Luc Van Gool</p>



<p><em>International Conference on Robotics and Automation Workshops, ICRA 2022</em></p>



<h3 class="wp-block-heading">Abstract</h3>



<p>Tracking objects in 3D is a fundamental task in computer vision that finds use in a wide range of applications such as autonomous driving, robotics, and augmented reality. Most recent approaches for 3D multi-object tracking (MOT) from LIDAR use object dynamics together with a set of handcrafted features to match detections of objects. However, manually designing such features and heuristics is cumbersome and often leads to suboptimal performance. In this work, we instead strive towards a unified and learning-based approach to the 3D MOT problem. We design a graph structure to jointly process detection and track states in an online manner. To this end, we employ a Neural Message Passing network for data association that is fully trainable. Our approach provides a natural way for track initialization and handling of false positive detections, while significantly improving track stability. We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.</p>
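As a rough illustration of the message-passing idea, a single aggregation-and-update round might look like the sketch below. All shapes, weights, and the toy graph are hypothetical; the actual network is trained end-to-end and considerably more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)

def message_passing_step(node_feats, edges, W_msg, W_upd):
    """One round of neural message passing (illustrative only).

    Each node sums transformed messages from its neighbors, then
    updates its own feature vector from the concatenation of its
    current features and the aggregated messages. Stacking such
    rounds lets detection and track nodes exchange information
    before association scores are read out.
    """
    n, d = node_feats.shape
    msgs = np.zeros((n, d))
    for src, dst in edges:
        msgs[dst] += np.maximum(node_feats[src] @ W_msg, 0.0)  # ReLU message
    updated = np.concatenate([node_feats, msgs], axis=1) @ W_upd
    return np.maximum(updated, 0.0)  # ReLU update

# Hypothetical graph: track nodes 0, 1 and detection nodes 2, 3,
# fully connected between tracks and detections (both directions).
d = 4
feats = rng.standard_normal((4, d))
edges = [(0, 2), (2, 0), (0, 3), (3, 0), (1, 2), (2, 1), (1, 3), (3, 1)]
W_msg = rng.standard_normal((d, d))
W_upd = rng.standard_normal((2 * d, d))
out = message_passing_step(feats, edges, W_msg, W_upd)
```

In an online setting, new detections enter the graph as fresh nodes each frame, which is what makes track initialization and false-positive handling fall out naturally.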
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
