Update documentation
jimbz committed Feb 6, 2014
1 parent d912243 commit ed27a52
Showing 11 changed files with 24 additions and 24 deletions.
12 changes: 6 additions & 6 deletions author/jim-braux-zin.html
@@ -102,7 +102,7 @@ <h1 itemprop="name">
<br/><br/>
I am expected to defend my thesis in <strong>June 2014</strong>. I am mainly looking for exciting Computer Vision projects, but I am open to any challenge; I would especially like to try my hand at some machine learning or big data applications. A full resume is available in <a href="http://www.braux-zin.com/pdf/brauxzin_resume.pdf">English</a> or in <a href="http://www.braux-zin.com/pdf/brauxzin_cv.pdf">French</a>.</p>
<h1 id="education">Education</h1>
<p>I enrolled in a <em>double-degree</em> Master's degree at <a href="http://www.supelec.fr/374_p_14603/welcome.html">Supélec</a> (Paris, France) and <a href="http://www.kth.se/en">The Royal Institute of Technology (KTH)</a> (Stockholm, Sweden). I majored in digital communications and signal processing with minors in robotics and computer vision. I was deeply implicated in student association projects such as <a href="http://enactus.org/">Enactus</a>.</p>
<p>I enrolled in a <em>double-degree</em> Master's degree at <a href="http://www.supelec.fr/374_p_14603/welcome.html">Supélec</a> (Paris, France) and <a href="http://www.kth.se/en">The Royal Institute of Technology (KTH)</a> (Stockholm, Sweden). I majored in digital communications and signal processing with minors in robotics and computer vision. I was deeply involved in student association projects such as <a href="http://enactus.org/">Enactus</a>.</p>
<section class="skills-section">
<h1>Scientific skills</h1>

@@ -130,7 +130,7 @@ <h2>2D Computer Vision</h2>
<h2>Continuous Optimization</h2>
<ul>
<li>Convex optimization (Gauss&#8209;Newton, Levenberg&#8209;Marquardt)</li>
<li>Total Variation regularization</li>
<li>Total (Generalized) Variation regularization</li>
<li>Global optimization by Particle Swarm Optimization</li>
</ul>
</div>
@@ -180,7 +180,7 @@ <h1 id="optical-see-through-augmented-reality">Optical-See-Through Augmented Rea
Augmented Reality could be of great help for critical applications such as driving or surgery assistance. However, in these cases every millisecond counts and the user cannot afford any latency added to reality. This rules out <em>video see-through</em> solutions in favor of <em>optical see-through</em> ones, where virtual augmentations are layered onto reality through a semi-transparent display. This adds new constraints on the system for proper alignment with reality. We focused on a tablet-like system composed of a transparent LCD screen and two localization devices (one to compute the pose relative to the environment and the other to locate the user). We believe this kind of system would be more practical for the user (well-delimited window, no heavy head-mounted device) and for the designer (fewer constraints on weight, slower motion) than current head-mounted displays.</p>
<h1 id="combining-direct-and-feature-based-costs-for-optical-flow-and-stereovision">Combining Direct and Feature-Based Costs for Optical Flow and Stereovision</h1>
<p>The estimation of a dense motion field (optical flow) is a very important building block for many computer vision tasks such as 3d reconstruction. We introduce a new framework that leverages the information provided by sparse feature matches (points or segments) to guide a dense iterative optical flow estimation out of local minima. This vastly increases the convergence basin without any loss of accuracy. A wide range of applications is then possible without modification, such as wide-baseline stereovision or non-rigid surface registration. Our method is one of the top-ranking methods on the <a href="http://www.cvlibs.net/datasets/kitti/eval_stereo_flow_detail.php?benchmark=flow&amp;error=3&amp;eval=all&amp;result=5ca150bba490fec5afa8ee7beaeeed8f0fc585ac">KITTI</a> benchmark.</p>
<p class="standalone"><img alt="Reference image" src="http://www.braux-zin.com/images/daisy/4.png" title="Reference image" /> <img alt="Second image" src="http://www.braux-zin.com/images/daisy/2.png" title="Second image" /> <img alt="Depth" src="http://www.braux-zin.com/images/daisy/42.png" title="Computed depth map with detected self-occlusions in green" /></p>
<p class="standalone"><img alt="Wide-baseline stereo" src="http://www.braux-zin.com/images/dense_matching/daisy25.gif" style="width:auto;height:200px;" title="Wide-baseline stereo" /> <img alt="Wide-baseline non-rigid registration" src="http://www.braux-zin.com/images/dense_matching/michelle.gif" style="width:auto;height:200px;" title="Wide-baseline non-rigid registration" /></p>
</span>
</article>
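The dense-matching paragraph above pairs a direct (photometric) cost with sparse feature matches and a total-variation-style prior. A schematic variational energy of this kind, written in generic notation as an illustration rather than the paper's exact formulation, is:

$$
E(\mathbf{w}) = \int_{\Omega} \psi\big( I_2(\mathbf{x} + \mathbf{w}(\mathbf{x})) - I_1(\mathbf{x}) \big)\, \mathrm{d}\mathbf{x}
\;+\; \lambda\, \mathrm{TGV}(\mathbf{w})
\;+\; \mu \sum_{i} \rho_i \, \lVert \mathbf{w}(\mathbf{x}_i) - \mathbf{w}_i \rVert^{2}
$$

Here \(\mathbf{w}\) is the dense flow field, \(\psi\) a robust penalty on the photometric difference between images \(I_1\) and \(I_2\), \(\mathrm{TGV}\) a (generalized) total-variation regularizer, and \((\mathbf{x}_i, \mathbf{w}_i)\) the sparse point or segment matches with confidences \(\rho_i\); the weights \(\lambda, \mu\) are illustrative. The feature term is what steers the iterative minimization of such an energy out of local minima, which is the mechanism the paragraph describes.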

@@ -222,10 +222,10 @@ <h3 class="title">A general dense image matching framework combining direct and
<span class="btitle"><em>International Conference on Computer Vision (ICCV)</em>.</span>
<span class="date">2013</span>
<span class="links">
<a href="javascript:disp('@inproceedings{\n brauxzin2013iccv,\n author = &amp;#34;Braux-Zin, Jim and Dupont, Romain and Bartoli, Adrien&amp;#34;,\n organization = &amp;#34;IEEE&amp;#34;,\n booktitle = &amp;#34;International Conference on Computer Vision (ICCV)&amp;#34;,\n year = &amp;#34;2013&amp;#34;,\n title = &amp;#34;A General Dense Image Matching Framework Combining Direct and Feature-based Costs&amp;#34;\n}\n\n');" class="bibtex">Bibtex</a>
<a href="javascript:disp('@inproceedings{\n brauxzin2013iccv,\n author = &amp;#34;Braux-Zin, Jim and Dupont, Romain and Bartoli, Adrien&amp;#34;,\n title = &amp;#34;A General Dense Image Matching Framework Combining Direct and Feature-based Costs&amp;#34;,\n booktitle = &amp;#34;International Conference on Computer Vision (ICCV)&amp;#34;,\n year = &amp;#34;2013&amp;#34;,\n organization = &amp;#34;IEEE&amp;#34;\n}\n\n');" class="bibtex">Bibtex</a>
<a href="/pdf/brauxzin_iccv2013.pdf" target="_blank" class="docs">PDF</a>


<a href="/pdf/brauxzin_iccv2013_slides.ppt" target="_blank" class="docs">Slides</a>
<a href="/pdf/brauxzin_iccv2013_poster.pdf" target="_blank" class="docs">Poster</a>
</span>
</li>
<li class="publication">
6 changes: 3 additions & 3 deletions category/index.html
@@ -103,7 +103,7 @@ <h1 itemprop="name">
<br/><br/>
I am expected to defend my thesis in <strong>June 2014</strong>. I am mainly looking for exciting Computer Vision projects, but I am open to any challenge; I would especially like to try my hand at some machine learning or big data applications. A full resume is available in <a href="http://www.braux-zin.com/pdf/brauxzin_resume.pdf">English</a> or in <a href="http://www.braux-zin.com/pdf/brauxzin_cv.pdf">French</a>.</p>
<h1 id="education">Education</h1>
<p>I enrolled in a <em>double-degree</em> Master's degree at <a href="http://www.supelec.fr/374_p_14603/welcome.html">Supélec</a> (Paris, France) and <a href="http://www.kth.se/en">The Royal Institute of Technology (KTH)</a> (Stockholm, Sweden). I majored in digital communications and signal processing with minors in robotics and computer vision. I was deeply implicated in student association projects such as <a href="http://enactus.org/">Enactus</a>.</p>
<p>I enrolled in a <em>double-degree</em> Master's degree at <a href="http://www.supelec.fr/374_p_14603/welcome.html">Supélec</a> (Paris, France) and <a href="http://www.kth.se/en">The Royal Institute of Technology (KTH)</a> (Stockholm, Sweden). I majored in digital communications and signal processing with minors in robotics and computer vision. I was deeply involved in student association projects such as <a href="http://enactus.org/">Enactus</a>.</p>
<section class="skills-section">
<h1>Scientific skills</h1>

@@ -131,7 +131,7 @@ <h2>2D Computer Vision</h2>
<h2>Continuous Optimization</h2>
<ul>
<li>Convex optimization (Gauss&#8209;Newton, Levenberg&#8209;Marquardt)</li>
<li>Total Variation regularization</li>
<li>Total (Generalized) Variation regularization</li>
<li>Global optimization by Particle Swarm Optimization</li>
</ul>
</div>
@@ -181,7 +181,7 @@ <h1 id="optical-see-through-augmented-reality">Optical-See-Through Augmented Rea
Augmented Reality could be of great help for critical applications such as driving or surgery assistance. However, in these cases every millisecond counts and the user cannot afford any latency added to reality. This rules out <em>video see-through</em> solutions in favor of <em>optical see-through</em> ones, where virtual augmentations are layered onto reality through a semi-transparent display. This adds new constraints on the system for proper alignment with reality. We focused on a tablet-like system composed of a transparent LCD screen and two localization devices (one to compute the pose relative to the environment and the other to locate the user). We believe this kind of system would be more practical for the user (well-delimited window, no heavy head-mounted device) and for the designer (fewer constraints on weight, slower motion) than current head-mounted displays.</p>
<h1 id="combining-direct-and-feature-based-costs-for-optical-flow-and-stereovision">Combining Direct and Feature-Based Costs for Optical Flow and Stereovision</h1>
<p>The estimation of a dense motion field (optical flow) is a very important building block for many computer vision tasks such as 3d reconstruction. We introduce a new framework that leverages the information provided by sparse feature matches (points or segments) to guide a dense iterative optical flow estimation out of local minima. This vastly increases the convergence basin without any loss of accuracy. A wide range of applications is then possible without modification, such as wide-baseline stereovision or non-rigid surface registration. Our method is one of the top-ranking methods on the <a href="http://www.cvlibs.net/datasets/kitti/eval_stereo_flow_detail.php?benchmark=flow&amp;error=3&amp;eval=all&amp;result=5ca150bba490fec5afa8ee7beaeeed8f0fc585ac">KITTI</a> benchmark.</p>
<p class="standalone"><img alt="Reference image" src="http://www.braux-zin.com/images/daisy/4.png" title="Reference image" /> <img alt="Second image" src="http://www.braux-zin.com/images/daisy/2.png" title="Second image" /> <img alt="Depth" src="http://www.braux-zin.com/images/daisy/42.png" title="Computed depth map with detected self-occlusions in green" /></p>
<p class="standalone"><img alt="Wide-baseline stereo" src="http://www.braux-zin.com/images/dense_matching/daisy25.gif" style="width:auto;height:200px;" title="Wide-baseline stereo" /> <img alt="Wide-baseline non-rigid registration" src="http://www.braux-zin.com/images/dense_matching/michelle.gif" style="width:auto;height:200px;" title="Wide-baseline non-rigid registration" /></p>
</span>
</article>

6 changes: 3 additions & 3 deletions feeds/all.atom.xml
@@ -5,12 +5,12 @@
Augmented Reality could be of great help for critical applications such as driving or surgery assistance. However, in these cases every millisecond counts and the user cannot afford any latency added to reality. This rules out &lt;em&gt;video see-through&lt;/em&gt; solutions in favor of &lt;em&gt;optical see-through&lt;/em&gt; ones, where virtual augmentations are layered onto reality through a semi-transparent display. This adds new constraints on the system for proper alignment with reality. We focused on a tablet-like system composed of a transparent LCD screen and two localization devices (one to compute the pose relative to the environment and the other to locate the user). We believe this kind of system would be more practical for the user (well-delimited window, no heavy head-mounted device) and for the designer (fewer constraints on weight, slower motion) than current head-mounted displays.&lt;/p&gt;
&lt;h1 id="combining-direct-and-feature-based-costs-for-optical-flow-and-stereovision"&gt;Combining Direct and Feature-Based Costs for Optical Flow and Stereovision&lt;/h1&gt;
&lt;p&gt;The estimation of a dense motion field (optical flow) is a very important building block for many computer vision tasks such as 3d reconstruction. We introduce a new framework that leverages the information provided by sparse feature matches (points or segments) to guide a dense iterative optical flow estimation out of local minima. This vastly increases the convergence basin without any loss of accuracy. A wide range of applications is then possible without modification, such as wide-baseline stereovision or non-rigid surface registration. Our method is one of the top-ranking methods on the &lt;a href="http://www.cvlibs.net/datasets/kitti/eval_stereo_flow_detail.php?benchmark=flow&amp;amp;error=3&amp;amp;eval=all&amp;amp;result=5ca150bba490fec5afa8ee7beaeeed8f0fc585ac"&gt;KITTI&lt;/a&gt; benchmark.&lt;/p&gt;
&lt;p class="standalone"&gt;&lt;img alt="Reference image" src="http://www.braux-zin.com/images/daisy/4.png" title="Reference image" /&gt; &lt;img alt="Second image" src="http://www.braux-zin.com/images/daisy/2.png" title="Second image" /&gt; &lt;img alt="Depth" src="http://www.braux-zin.com/images/daisy/42.png" title="Computed depth map with detected self-occlusions in green" /&gt;&lt;/p&gt;</summary></entry><entry><title>Resume</title><link href="http://www.braux-zin.com/index.html#resume" rel="alternate"></link><updated>2013-10-25T00:00:00+02:00</updated><author><name>Jim Braux-Zin</name></author><id>tag:www.braux-zin.com,2013-10-25:index.html#resume</id><summary type="html">&lt;p&gt;&lt;img alt="Profile picture" src="http://www.braux-zin.com/images/jim-braux-zin.jpg" style="width:150px;border-radius:50%;" /&gt;
&lt;p class="standalone"&gt;&lt;img alt="Wide-baseline stereo" src="http://www.braux-zin.com/images/dense_matching/daisy25.gif" style="width:auto;height:200px;" title="Wide-baseline stereo" /&gt; &lt;img alt="Wide-baseline non-rigid registration" src="http://www.braux-zin.com/images/dense_matching/michelle.gif" style="width:auto;height:200px;" title="Wide-baseline non-rigid registration" /&gt;&lt;/p&gt;</summary></entry><entry><title>Resume</title><link href="http://www.braux-zin.com/index.html#resume" rel="alternate"></link><updated>2013-10-25T00:00:00+02:00</updated><author><name>Jim Braux-Zin</name></author><id>tag:www.braux-zin.com,2013-10-25:index.html#resume</id><summary type="html">&lt;p&gt;&lt;img alt="Profile picture" src="http://www.braux-zin.com/images/jim-braux-zin.jpg" style="width:150px;border-radius:50%;" /&gt;
Hi! My name is Jim and I am a French Ph.D. candidate in Computer Vision. I work at &lt;a href="http://www.kalisteo.fr/en/index.htm"&gt;CEA LIST&lt;/a&gt; under the supervision of &lt;a href="http://isit.u-clermont1.fr/~ab/"&gt;Adrien Bartoli&lt;/a&gt;. I am addressing exciting things such as &lt;em&gt;augmented reality&lt;/em&gt;, &lt;em&gt;3d localization&lt;/em&gt;, &lt;em&gt;3d reconstruction&lt;/em&gt; and &lt;em&gt;non-rigid surface registration&lt;/em&gt;.
&lt;br/&gt;&lt;br/&gt;
I am expected to defend my thesis in &lt;strong&gt;June 2014&lt;/strong&gt;. I am mainly looking for exciting Computer Vision projects, but I am open to any challenge; I would especially like to try my hand at some machine learning or big data applications. A full resume is available in &lt;a href="http://www.braux-zin.com/pdf/brauxzin_resume.pdf"&gt;English&lt;/a&gt; or in &lt;a href="http://www.braux-zin.com/pdf/brauxzin_cv.pdf"&gt;French&lt;/a&gt;.&lt;/p&gt;
&lt;h1 id="education"&gt;Education&lt;/h1&gt;
&lt;p&gt;I enrolled in a &lt;em&gt;double-degree&lt;/em&gt; Master's degree at &lt;a href="http://www.supelec.fr/374_p_14603/welcome.html"&gt;Supélec&lt;/a&gt; (Paris, France) and &lt;a href="http://www.kth.se/en"&gt;The Royal Institute of Technology (KTH)&lt;/a&gt; (Stockholm, Sweden). I majored in digital communications and signal processing with minors in robotics and computer vision. I was deeply implicated in student association projects such as &lt;a href="http://enactus.org/"&gt;Enactus&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I enrolled in a &lt;em&gt;double-degree&lt;/em&gt; Master's degree at &lt;a href="http://www.supelec.fr/374_p_14603/welcome.html"&gt;Supélec&lt;/a&gt; (Paris, France) and &lt;a href="http://www.kth.se/en"&gt;The Royal Institute of Technology (KTH)&lt;/a&gt; (Stockholm, Sweden). I majored in digital communications and signal processing with minors in robotics and computer vision. I was deeply involved in student association projects such as &lt;a href="http://enactus.org/"&gt;Enactus&lt;/a&gt;.&lt;/p&gt;
&lt;section class="skills-section"&gt;
&lt;h1&gt;Scientific skills&lt;/h1&gt;

@@ -38,7 +38,7 @@ I am expected to defend my thesis in &lt;strong&gt;june 2014&lt;/strong&gt;. I a
&lt;h2&gt;Continuous Optimization&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Convex optimization (Gauss&amp;#8209;Newton, Levenberg&amp;#8209;Marquardt)&lt;/li&gt;
&lt;li&gt;Total Variation regularization&lt;/li&gt;
&lt;li&gt;Total (Generalized) Variation regularization&lt;/li&gt;
&lt;li&gt;Global optimization by Particle Swarm Optimization&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
