Update documentation
jimbz committed Nov 4, 2013
1 parent: 2ab3cf7 · commit: 17540af
Showing 15 changed files with 43 additions and 50 deletions.
28 changes: 11 additions & 17 deletions author/jim-braux-zin.html
@@ -12,7 +12,7 @@
<![endif]-->

<!-- LESS pre-processor -->
<link rel="stylesheet" href="http://www.braux-zin.com/theme/css/style.min.css?3f0cf84a">
<link rel="stylesheet" href="http://www.braux-zin.com/theme/css/style.min.css?2bbf01f4">

<title>Jim Braux-Zin's homepage - Articles by Jim Braux-Zin</title>

@@ -96,7 +96,7 @@ <h1>
</h1>
</header>
<p><img alt="Profile picture" src="http://www.braux-zin.com/images/jim-braux-zin.jpg" style="width:150px;border-radius:50%;" />
Hi! My name is Jim and I am a French Ph.D. candidate in Computer Vision. I work at <a href="http://www.kalisteo.fr/en/index.htm">CEA LIST</a> under the supervision of <a href="http://isit.u-clermont1.fr/~ab/">Adrien Bartoli</a>. I am working on exciting topics such as <strong>augmented reality</strong>, <strong>3d localization</strong>, <strong>3d reconstruction</strong> and <strong>non-rigid surface registration</strong>. I expect to defend my thesis in June 2014. A full resume is available in <a href="http://www.braux-zin.com/pdf/brauxzin_resume.pdf">English</a> or in <a href="http://www.braux-zin.com/pdf/brauxzin_cv.pdf">French</a>.</p>
Hi! My name is Jim and I am a French Ph.D. candidate in Computer Vision. I work at <a href="http://www.kalisteo.fr/en/index.htm">CEA LIST</a> under the supervision of <a href="http://isit.u-clermont1.fr/~ab/">Adrien Bartoli</a>. I am working on exciting topics such as <em>augmented reality</em>, <em>3d localization</em>, <em>3d reconstruction</em> and <em>non-rigid surface registration</em>. I expect to defend my thesis in <strong>June 2014</strong>. A full resume is available in <a href="http://www.braux-zin.com/pdf/brauxzin_resume.pdf">English</a> or in <a href="http://www.braux-zin.com/pdf/brauxzin_cv.pdf">French</a>.</p>
<h1 id="education">Education</h1>
<p>I enrolled in a <em>double-degree</em> Master's program at <a href="http://www.supelec.fr/374_p_14603/welcome.html">Supélec</a> (Paris, France) and <a href="http://www.kth.se/en">The Royal Institute of Technology (KTH)</a> (Stockholm, Sweden). I majored in digital communications and signal processing with minors in robotics and computer vision. I was also deeply involved in student association projects such as <a href="http://enactus.org/">Enactus</a>.</p>
<section class="skills-section">
@@ -173,7 +173,7 @@ <h1 id="optical-see-through-augmented-reality">Optical-See-Through Augmented Rea
<p><img alt="Our optical see-through prototype" src="http://www.braux-zin.com/images/seethrough/system.jpg" />
Augmented Reality could be of great help for critical applications such as driving or surgery assistance. However, in these cases every millisecond counts and the user cannot afford any latency added to reality. This rules out <em>video see-through</em> solutions in favor of <em>optical see-through</em> ones, where virtual augmentations are layered onto reality through a semi-transparent display. This adds new constraints on the system for proper alignment with reality. We focused on a tablet-like system composed of a transparent LCD screen and two localization devices (one to compute the pose relative to the environment and the other to locate the user). We believe this kind of system would be more practical for the user (well-delimited window, no heavy head-mounted device) and for the designer (fewer constraints on weight, slower motion) than current head-mounted displays.</p>
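For illustration only (a minimal sketch, not the actual code of our prototype): assuming the environment tracker gives the screen pose in the world frame and the user tracker gives the eye position in screen coordinates, the augmentation for a world point is drawn where the eye-to-point ray crosses the screen plane.

# Illustrative sketch -- hypothetical helper, not the prototype's code.
# Assumes: T_world_to_screen is a 4x4 rigid transform from the environment
# tracker, eye_in_screen is the user's eye position in screen coordinates
# (metres), and the screen plane is z = 0 in its own frame.
import numpy as np

def project_on_screen(p_world, T_world_to_screen, eye_in_screen):
    # Express the world point in the screen coordinate frame.
    p = T_world_to_screen[:3, :3] @ p_world + T_world_to_screen[:3, 3]
    # Intersect the eye-to-point ray with the screen plane z = 0.
    d = p - eye_in_screen
    if abs(d[2]) < 1e-9:
        return None  # ray (nearly) parallel to the screen: no intersection
    t = -eye_in_screen[2] / d[2]
    return (eye_in_screen + t * d)[:2]  # (x, y) drawing position on the screen

# Example: eye 40 cm behind the screen, world point 2 m in front of it.
print(project_on_screen(np.array([0.1, 0.0, 2.0]), np.eye(4), np.array([0.0, 0.0, -0.4])))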
<h1 id="combining-direct-and-feature-based-costs-for-optical-flow-and-stereovision">Combining Direct and Feature-Based Costs for Optical Flow and Stereovision</h1>
<p>The estimation of a dense motion field (optical flow) is a key building block for many computer vision tasks such as 3d reconstruction. We introduce a new framework that leverages the information provided by sparse feature matches (point or segment) to guide a dense iterative optical flow estimation out of local minima. This vastly increases the convergence basin without any loss of accuracy. A wide range of applications, such as wide-baseline stereovision or non-rigid surface registration, is then possible without modification.</p>
<p>The estimation of a dense motion field (optical flow) is a key building block for many computer vision tasks such as 3d reconstruction. We introduce a new framework that leverages the information provided by sparse feature matches (point or segment) to guide a dense iterative optical flow estimation out of local minima. This vastly increases the convergence basin without any loss of accuracy. A wide range of applications, such as wide-baseline stereovision or non-rigid surface registration, is then possible without modification. Our method is among the top-ranking methods on the <a href="http://www.cvlibs.net/datasets/kitti/eval_stereo_flow_detail.php?benchmark=flow&amp;error=3&amp;eval=all&amp;result=5ca150bba490fec5afa8ee7beaeeed8f0fc585ac">KITTI</a> benchmark.</p>
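For intuition, such a combined objective can be written in a generic variational form (an illustrative sketch only; the exact data, regularization and penalty terms used in our papers may differ):

E(\mathbf{u}) \;=\; \int_\Omega \big| I_1(\mathbf{x} + \mathbf{u}(\mathbf{x})) - I_0(\mathbf{x}) \big| \, d\mathbf{x} \;+\; \lambda \int_\Omega \big\| \nabla \mathbf{u}(\mathbf{x}) \big\| \, d\mathbf{x} \;+\; \gamma \sum_k \rho\big( \mathbf{u}(\mathbf{x}_k) - \mathbf{m}_k \big)

where I_0 and I_1 are the two images and u is the dense flow field: the first two terms are the usual direct (intensity) cost and smoothness regularizer, while the last term attracts the flow at feature locations x_k towards the sparse matches m_k through a robust penalty ρ, which is what helps pull the iterative estimation out of local minima.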
<p class="standalone"><img alt="Reference image" src="http://www.braux-zin.com/images/daisy/4.png" title="Reference image" /> <img alt="Second image" src="http://www.braux-zin.com/images/daisy/2.png" title="Second image" /> <img alt="Depth" src="http://www.braux-zin.com/images/daisy/42.png" title="Computed depth map with detected self-occlusions in green" /></p>
</article>

@@ -203,12 +203,10 @@ <h3 class="title">Combining features and intensity for wide-baseline non-rigid s
<span class="btitle"><em>British Machine Vision Conference (BMVC)</em>.</span>
<span class="date">2013</span>
<span class="links">
[&nbsp;<a href="javascript:disp('@inproceedings{\n brauxzin2013bmvc,\n author = &amp;#34;Braux-Zin, Jim and Dupont, Romain and Bartoli, Adrien&amp;#34;,\n organization = &amp;#34;BMVA&amp;#34;,\n booktitle = &amp;#34;British Machine Vision Conference (BMVC)&amp;#34;,\n year = &amp;#34;2013&amp;#34;,\n title = &amp;#34;Combining Features and Intensity for Wide-Baseline Non-Rigid Surface Registration&amp;#34;\n}\n\n');">Bibtex</a>&nbsp;]
[&nbsp;<a href="/pdf/brauxzin_bmvc2013.pdf">PDF</a>&nbsp;]
</span>

</span>
<a href="javascript:disp('@inproceedings{\n brauxzin2013bmvc,\n author = &amp;#34;Braux-Zin, Jim and Dupont, Romain and Bartoli, Adrien&amp;#34;,\n title = &amp;#34;Combining Features and Intensity for Wide-Baseline Non-Rigid Surface Registration&amp;#34;,\n booktitle = &amp;#34;British Machine Vision Conference (BMVC)&amp;#34;,\n year = &amp;#34;2013&amp;#34;,\n organization = &amp;#34;BMVA&amp;#34;\n}\n\n');" class="bibtex">Bibtex</a>
<a href="/pdf/brauxzin_bmvc2013.pdf" target="_blank" class="docs">PDF</a>

<a href="/pdf/brauxzin_bmvc2013_poster.pdf" target="_blank" class="docs">Poster</a>
</span>
</li>
<li class="publication">
@@ -217,11 +215,9 @@ <h3 class="title">A general dense image matching framework combining direct and
<span class="btitle"><em>International Conference on Computer Vision (ICCV)</em>.</span>
<span class="date">2013</span>
<span class="links">
[&nbsp;<a href="javascript:disp('@inproceedings{\n brauxzin2013iccv,\n author = &amp;#34;Braux-Zin, Jim and Dupont, Romain and Bartoli, Adrien&amp;#34;,\n organization = &amp;#34;IEEE&amp;#34;,\n booktitle = &amp;#34;International Conference on Computer Vision (ICCV)&amp;#34;,\n year = &amp;#34;2013&amp;#34;,\n title = &amp;#34;A General Dense Image Matching Framework Combining Direct and Feature-based Costs&amp;#34;\n}\n\n');">Bibtex</a>&nbsp;]
[&nbsp;<a href="/pdf/brauxzin_iccv2013.pdf">PDF</a>&nbsp;]
</span>
<a href="javascript:disp('@inproceedings{\n brauxzin2013iccv,\n author = &amp;#34;Braux-Zin, Jim and Dupont, Romain and Bartoli, Adrien&amp;#34;,\n organization = &amp;#34;IEEE&amp;#34;,\n booktitle = &amp;#34;International Conference on Computer Vision (ICCV)&amp;#34;,\n year = &amp;#34;2013&amp;#34;,\n title = &amp;#34;A General Dense Image Matching Framework Combining Direct and Feature-based Costs&amp;#34;\n}\n\n');" class="bibtex">Bibtex</a>
<a href="/pdf/brauxzin_iccv2013.pdf" target="_blank" class="docs">PDF</a>

</span>

</span>
</li>
@@ -231,12 +227,10 @@ <h3 class="title">Calibrating an optical see-through rig with two non-overlappin
<span class="btitle"><em>3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT)</em>.</span>
<span class="date">2012</span>
<span class="links">
[&nbsp;<a href="javascript:disp('@inproceedings{\n brauxzin20123dimpvt,\n author = &amp;#34;Braux-Zin, Jim and Bartoli, Adrien and Dupont, Romain and Vinciguerra, Regis&amp;#34;,\n title = &amp;#34;Calibrating an Optical See-Through Rig with Two Non-overlapping Cameras: The Virtual Camera Framework&amp;#34;,\n booktitle = &amp;#34;3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT)&amp;#34;,\n year = &amp;#34;2012&amp;#34;,\n organization = &amp;#34;IEEE&amp;#34;,\n pages = &amp;#34;308--315&amp;#34;\n}\n\n');">Bibtex</a>&nbsp;]
[&nbsp;<a href="/pdf/brauxzin_3dimpvt2012.pdf">PDF</a>&nbsp;]
</span>

</span>
<a href="javascript:disp('@inproceedings{\n brauxzin20123dimpvt,\n author = &amp;#34;Braux-Zin, Jim and Bartoli, Adrien and Dupont, Romain and Vinciguerra, Regis&amp;#34;,\n title = &amp;#34;Calibrating an Optical See-Through Rig with Two Non-overlapping Cameras: The Virtual Camera Framework&amp;#34;,\n booktitle = &amp;#34;3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT)&amp;#34;,\n year = &amp;#34;2012&amp;#34;,\n organization = &amp;#34;IEEE&amp;#34;,\n pages = &amp;#34;308--315&amp;#34;\n}\n\n');" class="bibtex">Bibtex</a>
<a href="/pdf/brauxzin_3dimpvt2012.pdf" target="_blank" class="docs">PDF</a>

<a href="/pdf/brauxzin_3dimpvt2012_poster.pdf" target="_blank" class="docs">Poster</a>
</span>
</li>
</ul>
2 changes: 1 addition & 1 deletion categories.html
@@ -12,7 +12,7 @@
<![endif]-->

<!-- LESS pre-processor -->
<link rel="stylesheet" href="http://www.braux-zin.com/theme/css/style.min.css?3f0cf84a">
<link rel="stylesheet" href="http://www.braux-zin.com/theme/css/style.min.css?2bbf01f4">

<title>Jim Braux-Zin's homepage</title>

6 changes: 3 additions & 3 deletions category/index.html
@@ -12,7 +12,7 @@
<![endif]-->

<!-- LESS pre-processor -->
<link rel="stylesheet" href="http://www.braux-zin.com/theme/css/style.min.css?3f0cf84a">
<link rel="stylesheet" href="http://www.braux-zin.com/theme/css/style.min.css?2bbf01f4">

<title>Jim Braux-Zin's homepage</title>

@@ -97,7 +97,7 @@ <h1>
</h1>
</header>
<p><img alt="Profile picture" src="http://www.braux-zin.com/images/jim-braux-zin.jpg" style="width:150px;border-radius:50%;" />
Hi! My name is Jim and I am a French Ph.D. candidate in Computer Vision. I work at <a href="http://www.kalisteo.fr/en/index.htm">CEA LIST</a> under the supervision of <a href="http://isit.u-clermont1.fr/~ab/">Adrien Bartoli</a>. I am working on exciting topics such as <strong>augmented reality</strong>, <strong>3d localization</strong>, <strong>3d reconstruction</strong> and <strong>non-rigid surface registration</strong>. I expect to defend my thesis in June 2014. A full resume is available in <a href="http://www.braux-zin.com/pdf/brauxzin_resume.pdf">English</a> or in <a href="http://www.braux-zin.com/pdf/brauxzin_cv.pdf">French</a>.</p>
Hi! My name is Jim and I am a French Ph.D. candidate in Computer Vision. I work at <a href="http://www.kalisteo.fr/en/index.htm">CEA LIST</a> under the supervision of <a href="http://isit.u-clermont1.fr/~ab/">Adrien Bartoli</a>. I am working on exciting topics such as <em>augmented reality</em>, <em>3d localization</em>, <em>3d reconstruction</em> and <em>non-rigid surface registration</em>. I expect to defend my thesis in <strong>June 2014</strong>. A full resume is available in <a href="http://www.braux-zin.com/pdf/brauxzin_resume.pdf">English</a> or in <a href="http://www.braux-zin.com/pdf/brauxzin_cv.pdf">French</a>.</p>
<h1 id="education">Education</h1>
<p>I enrolled in a <em>double-degree</em> Master's program at <a href="http://www.supelec.fr/374_p_14603/welcome.html">Supélec</a> (Paris, France) and <a href="http://www.kth.se/en">The Royal Institute of Technology (KTH)</a> (Stockholm, Sweden). I majored in digital communications and signal processing with minors in robotics and computer vision. I was also deeply involved in student association projects such as <a href="http://enactus.org/">Enactus</a>.</p>
<section class="skills-section">
@@ -174,7 +174,7 @@ <h1 id="optical-see-through-augmented-reality">Optical-See-Through Augmented Rea
<p><img alt="Our optical see-through prototype" src="http://www.braux-zin.com/images/seethrough/system.jpg" />
Augmented Reality could be of great help for critical applications such as driving or surgery assistance. However, in these cases every millisecond counts and the user cannot afford any latency added to reality. This rules out <em>video see-through</em> solutions in favor of <em>optical see-through</em> ones, where virtual augmentations are layered onto reality through a semi-transparent display. This adds new constraints on the system for proper alignment with reality. We focused on a tablet-like system composed of a transparent LCD screen and two localization devices (one to compute the pose relative to the environment and the other to locate the user). We believe this kind of system would be more practical for the user (well-delimited window, no heavy head-mounted device) and for the designer (fewer constraints on weight, slower motion) than current head-mounted displays.</p>
<h1 id="combining-direct-and-feature-based-costs-for-optical-flow-and-stereovision">Combining Direct and Feature-Based Costs for Optical Flow and Stereovision</h1>
<p>The estimation of a dense motion field (optical flow) is a key building block for many computer vision tasks such as 3d reconstruction. We introduce a new framework that leverages the information provided by sparse feature matches (point or segment) to guide a dense iterative optical flow estimation out of local minima. This vastly increases the convergence basin without any loss of accuracy. A wide range of applications, such as wide-baseline stereovision or non-rigid surface registration, is then possible without modification.</p>
<p>The estimation of a dense motion field (optical flow) is a key building block for many computer vision tasks such as 3d reconstruction. We introduce a new framework that leverages the information provided by sparse feature matches (point or segment) to guide a dense iterative optical flow estimation out of local minima. This vastly increases the convergence basin without any loss of accuracy. A wide range of applications, such as wide-baseline stereovision or non-rigid surface registration, is then possible without modification. Our method is among the top-ranking methods on the <a href="http://www.cvlibs.net/datasets/kitti/eval_stereo_flow_detail.php?benchmark=flow&amp;error=3&amp;eval=all&amp;result=5ca150bba490fec5afa8ee7beaeeed8f0fc585ac">KITTI</a> benchmark.</p>
<p class="standalone"><img alt="Reference image" src="http://www.braux-zin.com/images/daisy/4.png" title="Reference image" /> <img alt="Second image" src="http://www.braux-zin.com/images/daisy/2.png" title="Second image" /> <img alt="Depth" src="http://www.braux-zin.com/images/daisy/42.png" title="Computed depth map with detected self-occlusions in green" /></p>
</article>

