2023
Worchel, Markus; Alexa, Marc: Differentiable Shadow Mapping for Efficient Inverse Graphics (Inproceedings). In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 142-153, 2023.
Links: CVF Open Access: https://openaccess.thecvf.com/content/CVPR2023/html/Worchel_Differentiable_Shadow_Mapping_for_Efficient_Inverse_Graphics_CVPR_2023_paper.html; Project Page: https://mworchel.github.io/differentiable-shadow-mapping/; Code: https://github.com/mworchel/differentiable-shadow-mapping
Abstract: We show how shadows can be efficiently generated in differentiable rendering of triangle meshes. Our central observation is that pre-filtered shadow mapping, a technique for approximating shadows based on rendering from the perspective of a light, can be combined with existing differentiable rasterizers to yield differentiable visibility information. We demonstrate on several inverse graphics problems that differentiable shadow maps are orders of magnitude faster than differentiable light transport simulation with similar accuracy -- while differentiable rasterization without shadows often fails to converge.
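For readers unfamiliar with pre-filtered shadow mapping: the classic variance shadow map illustrates why such methods are differentiable, since the visibility test reduces to filtering and arithmetic. The sketch below is a minimal PyTorch illustration of that test under assumed tensor layouts; it is not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def vsm_visibility(depth_map, receiver_depth, kernel=7):
    """Variance shadow map test (Chebyshev upper bound).

    depth_map:      (1, 1, H, W) depth rendered from the light's view.
    receiver_depth: (1, 1, H, W) depth of the shaded points, in the same
                    light-space units, sampled at matching texels.
    Returns a soft visibility estimate in [0, 1] that stays
    differentiable because it only uses filtering and arithmetic.
    """
    # Store depth and squared depth, then pre-filter both (box blur here).
    moments = torch.cat([depth_map, depth_map * depth_map], dim=1)
    weight = torch.ones(2, 1, kernel, kernel) / (kernel * kernel)
    moments = F.conv2d(moments, weight, padding=kernel // 2, groups=2)
    mu, m2 = moments[:, 0:1], moments[:, 1:2]
    var = (m2 - mu * mu).clamp(min=1e-6)

    # Chebyshev's inequality bounds the fraction of blockers closer than
    # the receiver; points in front of the mean blocker are fully lit.
    d = receiver_depth - mu
    p_max = var / (var + d * d)
    return torch.where(d <= 0, torch.ones_like(p_max), p_max)
```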
2022
Alexa, Marc: α-Functions: Piecewise-linear Approximation from Noisy and Hermite Data (Inproceedings). In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1-9, 2022.
Links: Paper (DOI): https://dl.acm.org/doi/abs/10.1145/3528233.3530743; Project Page: https://www.cg.tu-berlin.de/research/projects/alpha-functions/; Preprint: http://cybertron.cg.tu-berlin.de/~alexa/alpha-sig-preprint.pdf
Abstract: We introduce α-functions, which provide a piecewise linear approximation to given data as the difference of two convex functions. The parameter α controls the shape of a paraboloid that probes the data and can be used to filter out noise. The use of convex functions enables tools for efficiently approximating the data, adds robustness to outliers, and allows dealing with gradient information. It also makes the approach applicable in higher dimensions. We show that α-functions can be efficiently computed and demonstrate their versatility using surface reconstruction from noisy surface samples as an example.
Alexa, Marc: Super-Fibonacci Spirals: Fast, Low-Discrepancy Sampling of SO(3) (Inproceedings). In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8291-8300, 2022.
Links: CVF Open Access: https://openaccess.thecvf.com/content/CVPR2022/html/Alexa_Super-Fibonacci_Spirals_Fast_Low-Discrepancy_Sampling_of_SO3_CVPR_2022_paper.html; Project on GitHub: https://marcalexa.github.io/superfibonacci/
Abstract: Super-Fibonacci spirals are an extension of Fibonacci spirals, enabling fast generation of an arbitrary but fixed number of 3D orientations. The algorithm is simple and fast. A comprehensive evaluation against other methods shows that the generated sets of orientations have low discrepancy, minimal spurious components in the power spectrum, and almost identical Voronoi volumes. This makes them useful for a variety of applications in vision, robotics, machine learning, and in particular Monte Carlo sampling.
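To give a flavor of how such a spiral construction produces unit quaternions, here is a NumPy sketch following the pattern of the reference code on the project page. The two irrational constants are the ones proposed in the paper, quoted from memory and best treated as assumptions; consult the project page for the authoritative values.

```python
import numpy as np

def super_fibonacci(n, phi=np.sqrt(2.0), psi=1.533751168755204):
    """Generate n unit quaternions along a Super-Fibonacci spiral.

    phi and psi are the irrational step constants from the paper
    (psi quoted from memory here). Each sample lies on S^3, so it
    can be read as a rotation in quaternion form (x, y, z, w).
    """
    s = np.arange(n) + 0.5
    t = s / n
    r = np.sqrt(t)           # radius in the first 2-plane
    R = np.sqrt(1.0 - t)     # radius in the second 2-plane
    alpha = 2.0 * np.pi * s / phi
    beta = 2.0 * np.pi * s / psi
    return np.stack([r * np.sin(alpha), r * np.cos(alpha),
                     R * np.sin(beta),  R * np.cos(beta)], axis=1)

# Example: 1000 near-uniform orientations; r^2 + R^2 = 1 by construction.
Q = super_fibonacci(1000)
assert np.allclose(np.linalg.norm(Q, axis=1), 1.0)
```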
Worchel, Markus; Diaz, Rodrigo; Hu, Weiwen; Schreer, Oliver; Feldmann, Ingo; Eisert, Peter: Multi-View Mesh Reconstruction With Neural Deferred Shading (Inproceedings). In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6187-6197, 2022.
Links: CVF Open Access: https://openaccess.thecvf.com/content/CVPR2022/html/Worchel_Multi-View_Mesh_Reconstruction_With_Neural_Deferred_Shading_CVPR_2022_paper.html; Project Page: https://fraunhoferhhi.github.io/neural-deferred-shading/; Code: https://github.com/fraunhoferhhi/neural-deferred-shading
Abstract: We propose an analysis-by-synthesis method for fast multi-view 3D reconstruction of opaque objects with arbitrary materials and illumination. State-of-the-art methods use both neural surface representations and neural rendering. While flexible, neural surface representations are a significant bottleneck in optimization runtime. Instead, we represent surfaces as triangle meshes and build a differentiable rendering pipeline around triangle rasterization and neural shading. The renderer is used in a gradient descent optimization where both a triangle mesh and a neural shader are jointly optimized to reproduce the multi-view images. We evaluate our method on a public 3D reconstruction dataset and show that it can match the reconstruction accuracy of traditional baselines and neural approaches while surpassing them in optimization runtime. Additionally, we investigate the shader and find that it learns an interpretable representation of appearance, enabling applications such as 3D material editing.
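The joint optimization described in the abstract, gradient descent over both vertex positions and a neural shader against observed images, can be summarized in a short training loop. Everything below is a schematic sketch, not the authors' code: rasterize_gbuffer, views.sample(), and camera.view_dirs() are hypothetical placeholders, and the network size and learning rate are arbitrary.

```python
import torch

def rasterize_gbuffer(vertices, faces, camera):
    """Hypothetical placeholder for a differentiable rasterizer that
    returns per-pixel world positions, normals, and a coverage mask."""
    raise NotImplementedError

class NeuralShader(torch.nn.Module):
    """Small MLP mapping per-pixel geometry to RGB (stand-in shader)."""
    def __init__(self, in_dim=9):  # position + normal + view direction
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(in_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, 3), torch.nn.Sigmoid())

    def forward(self, position, normal, view_dir):
        return self.mlp(torch.cat([position, normal, view_dir], dim=-1))

def reconstruct(vertices, faces, views, steps=2000):
    """Jointly optimize mesh vertices and shader weights to reproduce
    the observed images ('views' is an assumed dataset object)."""
    vertices = vertices.clone().requires_grad_(True)
    shader = NeuralShader()
    opt = torch.optim.Adam([vertices, *shader.parameters()], lr=1e-3)
    for _ in range(steps):
        camera, target = views.sample()   # one observed view (assumed API)
        position, normal, mask = rasterize_gbuffer(vertices, faces, camera)
        rgb = shader(position, normal, camera.view_dirs(position))
        loss = ((rgb - target).abs() * mask).mean()  # masked image loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return vertices.detach(), shader
```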
2021
Koch, Sebastian; Piadyk, Yurii; Worchel, Markus; Alexa, Marc; Silva, Claudio; Zorin, Denis; Panozzo, Daniele: Hardware Design and Accurate Simulation of Structured-Light Scanning for Benchmarking of 3D Reconstruction Algorithms (Incollection). In: Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021), Datasets and Benchmarks Track, 2021.
Links: Project Page: https://geometryprocessing.github.io/scanner-sim; OpenReview: https://openreview.net/forum?id=bNL5VlTfe3p
Abstract: Images of a real scene taken with a camera commonly differ from synthetic images of a virtual replica of the same scene, despite advances in light transport simulation and calibration. By explicitly co-developing the Structured-Light Scanning (SLS) hardware and rendering pipeline, we are able to achieve negligible per-pixel difference between the real image and the synthesized image on geometrically complex calibration objects with known material properties. This approach provides an ideal test bed for developing and evaluating data-driven algorithms in the area of 3D reconstruction, as the synthetic data is indistinguishable from real data and can be generated at large scale by simulation. We propose three benchmark challenges using a combination of acquired and synthetic data generated with our system: (1) a denoising benchmark tailored to structured-light scanning, (2) a shape completion benchmark to fill in missing data, and (3) a benchmark for surface reconstruction from dense point clouds. In addition, we provide on our website a large collection of high-resolution scans that allows our system and benchmarks to be used without reproducing the hardware setup.
Bunge, Astrid; Botsch, Mario; Alexa, Marc: The Diamond Laplace for Polygonal and Polyhedral Meshes (Journal Article). Computer Graphics Forum, 40 (5), pp. 217-230, 2021. DOI: 10.1111/cgf.14369
Links: Talk (YouTube): https://www.youtube.com/watch?v=i7mYiJSG2ss
Abstract: We introduce a construction for discrete gradient operators that can be directly applied to arbitrary polygonal surface meshes as well as polyhedral volume meshes. The main idea is to associate the gradient of functions defined at vertices of the mesh with diamonds: the region spanned by a dual edge together with its corresponding primal element — an edge for surface meshes and a face for volumetric meshes. We call the operator resulting from taking the divergence of the gradient the Diamond Laplacian. Additional vertices used for the construction are represented as affine combinations of the original vertices, so that the Laplacian operator maps from values at vertices to values at vertices, as is common in geometry processing applications. The construction is local, exactly the same for all types of meshes, and results in a symmetric negative definite operator with linear precision. We show that the accuracy of the Diamond Laplacian is similar to or better than that of other discretizations. The greater versatility and generally good behavior come at the expense of an increase in the number of non-zero coefficients that depends on the degree of the mesh elements.
Kohlbrenner, Max; Finnendahl, Ugo; Djuren, Tobias; Alexa, Marc: Gauss Stylization: Interactive Artistic Mesh Modeling based on Preferred Surface Normals (Journal Article). Computer Graphics Forum, 40 (5), pp. 33-43, 2021. DOI: 10.1111/cgf.14355
Links: Project Page: https://cybertron.cg.tu-berlin.de/projects/gaussStylization/
Abstract: Extending the ARAP energy with a term that depends on the face normals turns energy minimization into an effective stylization tool for shapes represented as meshes. Our approach generalizes the possibilities of Cubic Stylization: the set of preferred normals can be chosen arbitrarily from the Gauss sphere, including semi-discrete sets to model preference for cylinder- or cone-like shapes. As in ARAP, the optimization is designed to retain a constant linear system in the global step. This leads to convergence behavior that enables interactive control over the parameters of the optimization. We provide various examples demonstrating the simplicity and versatility of the approach.
Zhang, Jiayi Eris; Jacobson, Alec; Alexa, Marc: Fast Updates for Least-Squares Rotational Alignment (Journal Article). Computer Graphics Forum, 40 (2), pp. 12-22, 2021. DOI: 10.1111/cgf.142611
Links: Project Page: https://www.dgp.toronto.edu/projects/fast-rotation-fitting/
Abstract: Across computer graphics, vision, robotics and simulation, many applications rely on determining the 3D rotation that aligns two objects or sets of points. The standard solution is to use singular value decomposition (SVD), where the optimal rotation is recovered as the product of the singular vectors. Faster computation of only the rotation is possible using suitable parameterizations of the rotations and iterative optimization. We propose such a method based on the Cayley transformations. The resulting optimization problem allows better local quadratic approximation compared to the Taylor approximation of the exponential map. This results in both faster convergence as well as more stable approximation compared to other iterative approaches. It also maps well to AVX vectorization. We compare our implementation with a wide range of alternatives on real and synthetic data. The results demonstrate up to two orders of magnitude of speedup compared to a straightforward SVD implementation and a 1.5-6 times speedup over popular optimized code.
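For context, the SVD baseline mentioned in the abstract (often called the Kabsch or Procrustes solution) fits in a few lines of NumPy. This is generic textbook code, not the paper's faster iterative update, which is not reproduced here.

```python
import numpy as np

def fit_rotation(P, Q):
    """SVD baseline (Kabsch/Procrustes): rotation R minimizing
    ||R P - Q||_F for 3xN point sets P, Q with centroids removed."""
    U, _, Vt = np.linalg.svd(Q @ P.T)               # 3x3 covariance
    S = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # avoid reflections
    return U @ S @ Vt

# Example: recover a random rotation from noiseless correspondences.
rng = np.random.default_rng(0)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true *= np.linalg.det(R_true)                     # ensure det = +1
P = rng.normal(size=(3, 100))
assert np.allclose(fit_rotation(P, R_true @ P), R_true)
```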
Alexa, Marc: PolyCover: Shape Approximating With Discrete Surface Orientation (Journal Article). IEEE Computer Graphics and Applications, 41 (3), pp. 85-95, 2021, ISSN: 0272-1716. DOI: 10.1109/MCG.2021.3060946
Abstract: We consider the problem of approximating given shapes so that the surface normals are restricted to a prescribed discrete set. Such shape approximations are commonly required in the context of manufacturing shapes. We provide an algorithm that first computes maximal interior polytopes and, then, selects a subset of offsets from the interior polytopes that cover the shape. This provides prescribed Hausdorff error approximations that use only a small number of primitives.
2020
Alexa, Marc: Conforming Weighted Delaunay Triangulation (Journal Article). ACM Transactions on Graphics, 39 (6), pp. 248, 2020. DOI: 10.1145/3414685.3417776
Links: Project Page: https://www.cg.tu-berlin.de/research/projects/cwdt/; ACM-authorized Paper: https://dl.acm.org/doi/10.1145/3414685.3417776?cid=81100235480
Abstract: Given a set of points together with a set of simplices, we show how to compute weights associated with the points such that the weighted Delaunay triangulation of the point set contains the simplices, if possible. For a given triangulated surface, this process provides a tetrahedral mesh conforming to the triangulation, i.e., it solves the problem of meshing the triangulated surface without inserting additional vertices. The restriction to weighted Delaunay triangulations ensures that the orthogonal dual mesh is embedded, facilitating common geometry processing tasks. We show that the existence of a single simplex in a weighted Delaunay triangulation for given vertices amounts to a set of linear inequalities, one for each vertex. This means that the number of inequalities for a given triangle mesh is quadratic in the number of mesh elements, making the naive approach impractical. We devise an algorithm that incrementally selects a small subset of inequalities, repeatedly updating the weights, until the weighted Delaunay triangulation contains all constrained simplices or the problem becomes infeasible. Applying this algorithm to a range of triangle meshes commonly used in graphics demonstrates that many of them admit a conforming weighted Delaunay triangulation, in contrast to conforming or constrained Delaunay, which require additional vertices to split the input primitives.
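The incremental scheme in the abstract is a classic cutting-plane loop. The sketch below only shows that control flow; violated_inequalities is a hypothetical oracle standing in for the paper's lifting construction, whose actual inequality coefficients are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def violated_inequalities(points, weights, simplices):
    """Hypothetical oracle: inspects the weighted Delaunay triangulation
    of (points, weights) and returns cutting planes (row, rhs) for every
    required simplex that is still missing."""
    raise NotImplementedError

def conforming_weights(points, simplices, max_rounds=100):
    """Cutting-plane skeleton: grow a system A w <= b on the weights w
    until all required simplices appear, or the LP becomes infeasible."""
    A, b = [], []
    w = np.zeros(len(points))
    for _ in range(max_rounds):
        cuts = violated_inequalities(points, w, simplices)
        if not cuts:
            return w                       # triangulation conforms
        for row, rhs in cuts:
            A.append(row)
            b.append(rhs)
        res = linprog(c=np.zeros(len(points)), A_ub=np.array(A),
                      b_ub=np.array(b), bounds=(None, None))
        if not res.success:
            return None                    # infeasible: no such weights
        w = res.x
    return None
```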
Wang, Xi; Ley, Andreas; Koch, Sebastian; Hays, James; Holmqvist, Kenneth; Alexa, Marc: Computational discrimination between natural images based on gaze during mental imagery (Journal Article). Scientific Reports, 10, pp. 13035, 2020, ISSN: 2045-2322. DOI: 10.1038/s41598-020-69807-0
Links: Article: https://rdcu.be/b6tel; Related Project: http://cybertron.cg.tu-berlin.de/xiwang/mental_imagery/retrieval.html
Abstract: When retrieving an image from memory, humans usually move their eyes spontaneously, as if the image were in front of them. Such eye movements correlate strongly with the spatial layout of the recalled image content and function as memory cues facilitating retrieval. However, how closely imagery eye movements correlate with the eye movements made while looking at the original image has been unclear so far. In this work we first quantify the similarity of eye movements between recalling an image and encoding the same image, followed by an investigation of whether comparing such pairs of eye movements can be used for computational image retrieval. Our results show that computational image retrieval based on eye movements during spontaneous imagery is feasible. Furthermore, we show that such a retrieval approach can be generalized to unseen images.
Alexa, Marc; Herholz, Philipp; Kohlbrenner, Maximilian; Sorkine, Olga: Properties of Laplace Operators for Tetrahedral Meshes (Journal Article). Computer Graphics Forum, 39 (5), pp. 55-68, 2020. DOI: 10.1111/cgf.14068
Links: Project Page: https://igl.ethz.ch/projects/LB3D/
Abstract: Discrete Laplacians for triangle meshes are a fundamental tool in geometry processing. The so-called cotan Laplacian is widely used since it preserves several important properties of its smooth counterpart. It can be derived from different principles: either considering the piecewise linear nature of the primal elements or associating values to the dual vertices. Both approaches lead to the same operator in the two-dimensional setting. In contrast, for tetrahedral meshes, only the primal construction is reminiscent of the cotan weights, involving dihedral angles. We provide explicit formulas for the lesser-known dual construction. In both cases, the weights can be computed by adding the contributions of individual tetrahedra to an edge. The resulting two different discrete Laplacians for tetrahedral meshes only retain some of the properties of their two-dimensional counterpart. In particular, while both constructions have linear precision, only the primal construction is positive semi-definite and only the dual construction generates positive weights and provides a maximum principle for Delaunay meshes. We perform a range of numerical experiments that highlight the benefits and limitations of the two constructions for different problems and meshes.
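The primal construction coincides with the standard linear finite-element stiffness matrix on tetrahedra, which offers a compact way to assemble it without explicitly computing dihedral angles. A minimal sketch, assuming the usual hat-function derivation (generic FEM, not the paper's code):

```python
import numpy as np

def tet_stiffness(p0, p1, p2, p3):
    """Element stiffness matrix of one tetrahedron for the primal (FEM)
    Laplacian: rows of G are the gradients of the four linear hat
    functions, and the 4x4 element matrix is vol * G @ G.T. Summing the
    element matrices over all tetrahedra assembles the operator."""
    T = np.column_stack([p1 - p0, p2 - p0, p3 - p0])
    vol = abs(np.linalg.det(T)) / 6.0
    Tinv = np.linalg.inv(T)                   # rows: gradients of hats 1..3
    G = np.vstack([-Tinv.sum(axis=0), Tinv])  # hat 0 = -(sum of the others)
    return vol * G @ G.T

# Sanity check: constants lie in the null space (rows sum to zero).
K = tet_stiffness(np.zeros(3), np.array([1., 0., 0.]),
                  np.array([0., 1., 0.]), np.array([0., 0., 1.]))
assert np.allclose(K @ np.ones(4), 0.0)
```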
2019
Jacobs, Jochen; Wang, Xi; Alexa, Marc: Keep It Simple: Depth-based Dynamic Adjustment of Rendering for Head-mounted Displays Decreases Visual Comfort (Journal Article). ACM Trans. Appl. Percept., 16 (3), pp. 16, 2019. DOI: 10.1145/3353902
Links: Article: https://dl.acm.org/citation.cfm?id=3353902; ACM Authorized Paper: https://dl.acm.org/authorize?N682026
Abstract: Head-mounted displays cause discomfort. This is commonly attributed to conflicting depth cues, most prominently between vergence, which is consistent with object depth, and accommodation, which is adjusted to the near eye displays. It is possible to adjust the camera parameters, specifically interocular distance and vergence angles, for rendering the virtual environment to minimize this conflict. This requires dynamic adjustment of the parameters based on object depth. In an experiment based on a visual search task, we evaluate how dynamic adjustment affects visual comfort compared to fixed camera parameters. We collect objective as well as subjective data. Results show that dynamic adjustment decreases common objective measures of visual comfort such as pupil diameter and blink rate by a statistically significant margin. The subjective evaluation of categories such as fatigue or eye irritation shows a similar trend but was inconclusive. This suggests that rendering with fixed camera parameters is the better choice for head-mounted displays, at least in scenarios similar to the ones used here.
Wang, Xi; Holmqvist, Kenneth; Alexa, Marc: The mean point of vergence is biased under projection (Journal Article). Journal of Eye Movement Research, 12 (4), 2019, ISSN: 1995-8692. DOI: 10.16910/jemr.12.4.2
Links: Article: https://bop.unibe.ch/JEMR/article/view/JEMR.12.4.2
Abstract: The point of interest in three-dimensional space is often computed in eye tracking by intersecting the lines of sight with scene geometry, or by finding the point closest to the two lines of sight. We start with a theoretical analysis based on synthetic simulations. We show that the mean point of vergence is generally biased for centrally symmetric errors and that the bias depends on the horizontal vs. vertical error distribution of the tracked eye positions. Our analysis continues with an evaluation on real experimental data. The error distributions differ among individuals, but they generally lead to the same bias towards the observer, which tends to grow with viewing distance. We also provide a recipe for minimizing the bias, which applies to general computations of eye ray intersection. These findings not only have implications for choosing the calibration method in eye tracking experiments and for interpreting the observed eye movement data; they also suggest that the mathematical models of calibration should be considered part of the experiment.
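The "point closest to the two lines of sight" from the abstract is the midpoint of the shortest segment connecting two generally skew lines. This is a standard geometric computation, sketched below (generic geometry, not the paper's code):

```python
import numpy as np

def vergence_point(o1, d1, o2, d2):
    """Point closest to two (generally skew) lines of sight.

    o1, o2: eye positions; d1, d2: gaze direction vectors.
    Solves the 2x2 normal equations for the line parameters and
    returns the midpoint of the shortest connecting segment.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = o2 - o1
    a = d1 @ d2
    denom = 1.0 - a * a
    if denom < 1e-12:                  # (nearly) parallel lines of sight
        t1, t2 = 0.0, -(d2 @ b)
    else:
        t1 = ((d1 @ b) - a * (d2 @ b)) / denom
        t2 = (a * (d1 @ b) - (d2 @ b)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```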
Wang, Xi; Holmqvist, Kenneth; Alexa, Marc: Computational discrimination between natural images based on gaze during mental imagery (Miscellaneous). Presented at the 20th European Conference on Eye Movements (ECEM), 2019.
Links: Abstract Book: http://ecem2019.com/media/attachments/2019/08/27/ecem_abstract_book_updated.pdf
Abstract: The term "looking at nothing" describes the phenomenon that humans move their eyes when looking at an empty space. Previous studies showed that eye movements during mental imagery while looking at nothing play a functional role in memory retrieval. However, they are not a reinstatement of the eye movements made while looking at the visual stimuli and are generally distorted due to the lack of reference points in an empty space. So far it remains unclear how similar eye movements during encoding are to eye movements during recall. We studied mental imagery eye movements while looking at nothing in a lab-controlled experiment. 100 natural images were viewed and recalled by 28 observers, following the standard looking-at-nothing paradigm. We compared the basic characteristics of eye movements during encoding and recall. Furthermore, we studied the similarity of eye movements between the two conditions by asking how visual imagery eye movements can be employed for computational image retrieval. Our results showed that gaze patterns in both conditions can be used to retrieve the corresponding visual stimuli. By utilizing the similarity between gaze patterns during encoding and those during recall, we showed that it is possible to generalize to new images. This study quantitatively compared the similarity between eye movements made while looking at the images and those made while recalling them, and offers a solid method for future studies of the looking-at-nothing phenomenon.
Etienne, Jimmy; Ray, Nicolas; Panozzo, Daniele; Hornus, Samuel; Wang, Charlie C. L.; Martinez, Jonas; McMains, Sara; Alexa, Marc; Wyvill, Brian; Lefebvre, Sylvain: CurviSlicer: slightly curved slicing for 3-axis printers (Journal Article). ACM Transactions on Graphics, 38 (4), 2019. DOI: 10.1145/3306346.3323022
Links: ACM Authorized Article: https://dl.acm.org/authorize?N681473
Abstract: Most additive manufacturing processes fabricate objects by stacking planar layers of solidified material. As a result, produced parts exhibit a so-called staircase effect, which results from sampling slanted surfaces with parallel planes. Using thinner slices reduces this effect, but it always remains visible where layers almost align with the input surfaces. In this research we exploit the ability of some additive manufacturing processes to deposit material slightly out of plane to dramatically reduce these artifacts. We focus in particular on the widespread Fused Filament Fabrication (FFF) technology, since most printers in this category can deposit along slightly curved paths, under deposition slope and thickness constraints. Our algorithm curves the layers, making them either follow the natural slope of the input surface or, on the contrary, intersect the surfaces at a steeper angle, thereby improving the sampling quality. Rather than directly computing curved layers, our algorithm optimizes for a deformation of the model which is then sliced with a standard planar approach. We demonstrate that this approach enables us to encode all fabrication constraints, including the guarantee of generating collision-free toolpaths, in a convex optimization that can be solved using a QP solver. We produce a variety of models and compare print quality between curved deposition and planar slicing.
Alexa, Marc: Harmonic Triangulations (Journal Article). ACM Transactions on Graphics, 38 (4), pp. 54, 2019. DOI: 10.1145/3306346.3322986
Links: Project Page: https://www.cg.tu-berlin.de/harmonic-triangulations/; ACM Authorized Paper: https://dl.acm.org/authorize?N688246
Abstract: We introduce the notion of harmonic triangulations: a harmonic triangulation simultaneously minimizes the Dirichlet energy of all piecewise linear functions. By a famous result of Rippa, Delaunay triangulations are the harmonic triangulations of planar point sets. We prove by explicit counterexample that in 3D a harmonic triangulation does not exist in general. However, we show that bistellar flips are harmonic: if they decrease Dirichlet energy for one set of function values, they do so for all. This observation gives rise to the notion of locally harmonic triangulations. We demonstrate that locally harmonic triangulations can be efficiently computed, and efficiently reduce sliver tetrahedra. The notion of harmonic triangulation also gives rise to a scalar measure of the quality of a triangulation, which can be used to prioritize flips and optimize the position of vertices. Tetrahedral meshes generated by optimizing this function generally show better quality than Delaunay-based optimization techniques.
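Rippa's result mentioned in the abstract can be checked numerically in a few lines: compute the Dirichlet energy of the piecewise-linear interpolant for both triangulations of a convex quadrilateral and compare. A small self-contained sketch with made-up coordinates and values (illustrative only, not the paper's code):

```python
import numpy as np

def dirichlet_energy(points, triangles, u):
    """Dirichlet energy of the piecewise-linear interpolant of u."""
    E = 0.0
    for a, b, c in triangles:
        # Gradient g of the linear interpolant on this triangle solves
        # (p_b - p_a) . g = u_b - u_a and (p_c - p_a) . g = u_c - u_a.
        M = np.array([points[b] - points[a], points[c] - points[a]])
        g = np.linalg.solve(M, [u[b] - u[a], u[c] - u[a]])
        area = 0.5 * abs(np.linalg.det(M))
        E += 0.5 * area * (g @ g)
    return E

# The two triangulations of a convex quad; by Rippa's theorem, the
# Delaunay diagonal never gives the higher energy, for any values u.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.2, 1.0], [0.0, 1.0]])
u = np.array([0.0, 1.0, 0.5, 2.0])
print(dirichlet_energy(pts, [(0, 1, 2), (0, 2, 3)], u))
print(dirichlet_energy(pts, [(0, 1, 3), (1, 2, 3)], u))
```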
Wang, Xi; Ley, Andreas; Koch, Sebastian; Lindlbauer, David; Hays, James; Holmqvist, Kenneth; Alexa, Marc: The Mental Image Revealed by Gaze Tracking (Conference). In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), ACM, 2019, ISBN: 978-1-4503-5970-2. DOI: 10.1145/3290605.3300839
Links: Project Page: http://cybertron.cg.tu-berlin.de/xiwang/mental_imagery/retrieval.html; ACM Authorized Paper: https://dl.acm.org/authorize?N681045
Abstract: Humans involuntarily move their eyes when retrieving an image from memory. This motion is often similar to actually observing the image. We suggest to exploit this behavior as a new modality in human computer interaction, using the motion of the eyes as a descriptor of the image. Interaction requires the user's eyes to be tracked but no voluntary physical activity. We perform a controlled experiment and develop matching techniques using machine learning to investigate if images can be discriminated based on the gaze patterns recorded while users merely think about an image. Our results indicate that image retrieval is possible with an accuracy significantly above chance. We also show that this result generalizes to images not used during training of the classifier and extends to uncontrolled settings in a realistic scenario.
Koch, Sebastian; Matveev, Albert; Jiang, Zhongshi; Williams, Francis; Artemov, Alexey; Burnaev, Evgeny; Alexa, Marc; Zorin, Denis; Panozzo, Daniele: ABC: A Big CAD Model Dataset for Geometric Deep Learning (Conference). In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9601-9611, IEEE, 2019.
Links: Article at CVF: http://openaccess.thecvf.com/content_CVPR_2019/html/Koch_ABC_A_Big_CAD_Model_Dataset_for_Geometric_Deep_Learning_CVPR_2019_paper.html
Abstract: We introduce ABC-Dataset, a collection of one million Computer-Aided Design (CAD) models for research of geometric deep learning methods and applications. Each model is a collection of explicitly parametrized curves and surfaces, providing ground truth for differential quantities, patch segmentation, geometric feature detection, and shape reconstruction. Sampling the parametric descriptions of surfaces and curves allows generating data in different formats and resolutions, enabling fair comparisons for a wide range of geometric learning algorithms. As a use case for our dataset, we perform a large-scale benchmark for estimation of surface normals, comparing existing data-driven methods and evaluating their performance against both the ground truth and traditional normal estimation methods.
Ion, Alexandra; Lindlbauer, David; Herholz, Philipp; Alexa, Marc; Baudisch, Patrick: Understanding Metamaterial Mechanisms (Conference). In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), ACM, 2019. DOI: 10.1145/3290605.3300877
Links: ACM Authorized Paper: https://dl.acm.org/authorize?N681474
Abstract: In this paper, we establish the underlying foundations of mechanisms that are composed of cell structures, known as metamaterial mechanisms. Such metamaterial mechanisms were previously shown to implement complete mechanisms in the cell structure of a 3D printed material, without the need for assembly. However, their design is highly challenging. A mechanism consists of many cells that are interconnected and impose constraints on each other. This leads to non-obvious and non-linear behavior of the mechanism, which impedes user design. In this work, we investigate the underlying topological constraints of such cell structures and their influence on the resulting mechanism. Based on these findings, we contribute a computational design tool that automatically creates a metamaterial mechanism from user-defined motion paths. This tool is only feasible because our novel abstract representation of the global constraints highly reduces the search space of possible cell arrangements.
Herholz, Philipp; Alexa, Marc: Efficient Computation of Smoothed Exponential Maps (Journal Article). Computer Graphics Forum, 38, pp. 79-90, 2019. DOI: 10.1111/cgf.13607
Abstract: Many applications in geometry processing require the computation of local parameterizations on a surface mesh at interactive rates. A popular approach is to compute local exponential maps, i.e. parameterizations that preserve distance and angle to the origin of the map. We extend the computation of geodesic distance by heat diffusion to also determine angular information for the geodesic curves. This approach has two important benefits compared to fast approximate as well as exact forward tracing of the distance function: First, it allows generating smoother maps, avoiding discontinuities. Second, exploiting the factorization of the global Laplace–Beltrami operator of the mesh and using recent localized solution techniques, the computation is more efficient even compared to fast approximate solutions based on Dijkstra's algorithm.
Wang, Xi; Chern, Albert; Alexa, Marc Center of circle after perspective transformation (Online) arXiv preprint arXiv:1902.04541 2019. @online{concentricCircles, title = {Center of circle after perspective transformation}, author = {Xi Wang and Albert Chern and Marc Alexa}, url = {https://arxiv.org/abs/1902.04541}, year = {2019}, date = {2019-02-12}, organization = {arXiv preprint arXiv:1902.04541}, abstract = {Video-based glint-free eye tracking commonly estimates gaze direction based on the pupil center. The boundary of the pupil is fitted with an ellipse and the Euclidean center of the ellipse in the image is taken as the center of the pupil. However, the center of the pupil is generally not mapped to the center of the ellipse by the projective camera transformation. This error resulting from using a point that is not the true center of the pupil directly affects eye tracking accuracy. We investigate the underlying geometric problem of determining the center of a circular object based on its projective image. The main idea is to exploit two concentric circles -- in the application scenario these are the pupil and the iris. We show that it is possible to compute the center and the ratio of the radii from the mapped concentric circles with a direct method that is fast and robust in practice. We evaluate our method on synthetically generated data and find that it improves systematically over using the center of the fitted ellipse. Apart from applications in eye tracking, we estimate that our approach will be useful in other tracking applications.}, keywords = {}, pubstate = {published}, tppubtype = {online} } Video-based glint-free eye tracking commonly estimates gaze direction based on the pupil center. The boundary of the pupil is fitted with an ellipse and the Euclidean center of the ellipse in the image is taken as the center of the pupil. However, the center of the pupil is generally not mapped to the center of the ellipse by the projective camera transformation. This error resulting from using a point that is not the true center of the pupil directly affects eye tracking accuracy. We investigate the underlying geometric problem of determining the center of a circular object based on its projective image. The main idea is to exploit two concentric circles -- in the application scenario these are the pupil and the iris. We show that it is possible to compute the center and the ratio of the radii from the mapped concentric circles with a direct method that is fast and robust in practice. We evaluate our method on synthetically generated data and find that it improves systematically over using the center of the fitted ellipse. Apart from applications in eye tracking, we estimate that our approach will be useful in other tracking applications. |
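The core claim, that the center of the fitted ellipse differs from the projected circle center, is easy to verify numerically. Below is a minimal sketch with an arbitrary synthetic homography H: it fits a conic to the projected circle points and compares the conic's center with the projection of the true center. It only demonstrates the discrepancy; it does not implement the paper's direct two-circle method.

```python
import numpy as np

def project(H, pts):
    # Apply a 3x3 homography to 2D points given as an (n, 2) array.
    q = (H @ np.column_stack([pts, np.ones(len(pts))]).T).T
    return q[:, :2] / q[:, 2:]

def fit_conic_center(pts):
    # Fit an implicit conic a x^2 + b xy + c y^2 + d x + e y + f = 0
    # via the SVD null vector, then solve grad = 0 for its center.
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(A)
    a, b, c, d, e, _ = Vt[-1]
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

H = np.array([[1.0, 0.1, 0.0],          # some perspective transformation
              [0.0, 1.0, 0.0],
              [0.3, 0.2, 1.0]])
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])   # unit circle at the origin

ellipse_center = fit_conic_center(project(H, circle))
true_center = project(H, np.zeros((1, 2)))[0]
print(ellipse_center, true_center)      # the two centers do not coincide
```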
2018 |
Wang, Xi; Koch, Sebastian; Holmqvist, Kenneth; Alexa, Marc Tracking the Gaze on Objects in 3D: How do People Really Look at the Bunny? (Journal Article) ACM Transactions on Graphics, 37 (6), 2018. @article{Gaze3D, title = {Tracking the Gaze on Objects in 3D: How do People Really Look at the Bunny?}, author = {Xi Wang and Sebastian Koch and Kenneth Holmqvist and Marc Alexa}, url = {http://cybertron.cg.tu-berlin.de/xiwang/project_saliency/3D_dataset.html, Project page https://dl.acm.org/authorize?N688247, ACM Authorized Paper}, doi = {10.1145/3272127.3275094}, year = {2018}, date = {2018-12-03}, booktitle = {ACM Transactions on Graphics (Proc. of Siggraph Asia)}, journal = {ACM Transactions on Graphics}, volume = {37}, number = {6}, publisher = {ACM}, abstract = {We provide the first large dataset of human fixations on physical 3D objects presented in varying viewing conditions and made of different materials. Our experimental setup is carefully designed to allow for accurate calibration and measurement. We estimate a mapping from the pair of pupil positions to 3D coordinates in space and register the presented shape with the eye tracking setup. By modeling the fixated positions on 3D shapes as a probability distribution, we analyze the similarities among different conditions. The resulting data indicates that salient features depend on the viewing direction. Stable features across different viewing directions seem to be connected to semantically meaningful parts. We also show that it is possible to estimate the gaze density maps from view-dependent data. The dataset provides the necessary ground truth data for computational models of human perception in 3D.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We provide the first large dataset of human fixations on physical 3D objects presented in varying viewing conditions and made of different materials. Our experimental setup is carefully designed to allow for accurate calibration and measurement. We estimate a mapping from the pair of pupil positions to 3D coordinates in space and register the presented shape with the eye tracking setup. By modeling the fixated positions on 3D shapes as a probability distribution, we analyze the similarities among different conditions. The resulting data indicates that salient features depend on the viewing direction. Stable features across different viewing directions seem to be connected to semantically meaningful parts. We also show that it is possible to estimate the gaze density maps from view-dependent data. The dataset provides the necessary ground truth data for computational models of human perception in 3D. |
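A much simplified sketch of the kind of comparison the abstract describes: fixations on a mesh are turned into a per-vertex probability distribution, and two distributions are compared by their overlap. Histogram intersection is used here only as an illustrative similarity measure; the paper's analysis is more involved.

```python
import numpy as np

def fixation_density(vertex_ids, n_vertices):
    # Per-vertex fixation histogram, normalized to a probability distribution.
    density = np.bincount(np.asarray(vertex_ids), minlength=n_vertices).astype(float)
    return density / density.sum()

def similarity(density_a, density_b):
    # Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint.
    return np.minimum(density_a, density_b).sum()

# Two observers fixating (hypothetical) vertex indices on the same mesh:
d1 = fixation_density([3, 3, 7, 12, 12, 12], n_vertices=20)
d2 = fixation_density([3, 7, 7, 12, 14], n_vertices=20)
print(similarity(d1, d2))
```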
Herholz, Philipp; Alexa, Marc Factor Once: Reusing Cholesky Factorizations on Sub-Meshes (Journal Article) ACM Transactions on Graphics, 37 (6), 2018. @article{Herholz:2018, title = {Factor Once: Reusing Cholesky Factorizations on Sub-Meshes}, author = {Philipp Herholz and Marc Alexa}, url = {https://dl.acm.org/authorize?N688248, ACM Authorized Paper}, doi = {10.1145/3272127.3275107}, year = {2018}, date = {2018-12-01}, booktitle = {ACM Transactions on Graphics (Proc. of Siggraph Asia)}, journal = {ACM Transactions on Graphics}, volume = {37}, number = {6}, publisher = {ACM}, abstract = {A common operation in geometry processing is solving symmetric and positive semi-definite systems on a subset of a mesh with conditions for the vertices at the boundary of the region. This is commonly done by setting up the linear system for the sub-mesh, factorizing the system (potentially applying preordering to improve sparseness of the factors), and then solving by back-substitution. This approach suffers from a comparably high setup cost for each local operation. We propose to reuse factorizations defined on the full mesh to solve linear problems on sub-meshes. We show how an update on sparse matrices can be performed in a particularly efficient way to obtain the factorization of the operator on a sub-mesh, significantly outperforming general factor updates and complete refactorization. We analyze the resulting speedup for a variety of situations and demonstrate that our method outperforms factorization of a new matrix by a factor of up to 10 while never being slower in our experiments.}, keywords = {}, pubstate = {published}, tppubtype = {article} } A common operation in geometry processing is solving symmetric and positive semi-definite systems on a subset of a mesh with conditions for the vertices at the boundary of the region. This is commonly done by setting up the linear system for the sub-mesh, factorizing the system (potentially applying preordering to improve sparseness of the factors), and then solving by back-substitution. This approach suffers from a comparably high setup cost for each local operation. We propose to reuse factorizations defined on the full mesh to solve linear problems on sub-meshes. We show how an update on sparse matrices can be performed in a particularly efficient way to obtain the factorization of the operator on a sub-mesh, significantly outperforming general factor updates and complete refactorization. We analyze the resulting speedup for a variety of situations and demonstrate that our method outperforms factorization of a new matrix by a factor of up to 10 while never being slower in our experiments. |
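For context, a minimal sketch of the baseline the paper accelerates: solving a system on a sub-mesh with prescribed values on its boundary vertices. The sparse operator A on the full mesh, the interior/boundary index arrays, and the boundary values are assumed inputs; the fresh factorization below is exactly the per-region setup cost that reusing a full-mesh factorization avoids.

```python
import numpy as np
import scipy.sparse.linalg as spla

def solve_submesh(A, interior, boundary, boundary_values):
    A = A.tocsr()
    A_ii = A[interior][:, interior]      # sub-operator on interior vertices
    A_ib = A[interior][:, boundary]      # coupling to the fixed boundary
    rhs = -A_ib @ boundary_values        # move the known values to the RHS
    lu = spla.splu(A_ii.tocsc())         # per-region factorization (the cost
    return lu.solve(rhs)                 # the paper's update scheme removes)
```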
Wang, Xi; Holmqvist, Kenneth; Alexa, Marc Maps of Visual Importance: What is recalled from visual episodic memory? (Miscellaneous) Presented at 41st European Conference on Visual Perception (ECVP), 2018. @misc{Eyetrackingb, title = {Maps of Visual Importance: What is recalled from visual episodic memory?}, author = {Xi Wang and Kenneth Holmqvist and Marc Alexa}, url = {http://cybertron.cg.tu-berlin.de/xiwang/files/ecvp18.pdf}, year = {2018}, date = {2018-08-26}, abstract = {It has been shown that not all fixated locations in a scene are encoded in visual memory. We propose a new way to probe experimentally whether the scene content corresponding to a fixation was considered important by the observer. Our protocol is based on findings from mental imagery showing that fixation locations are reenacted during recall. We track observers' eye movements during stimulus presentation and subsequently, observers are asked to recall the visual content while looking at a neutral background. The tracked gaze locations from the two conditions are aligned using a novel elastic matching algorithm. Motivated by the hypothesis that visual content is recalled only if it has been encoded, we filter fixations from the presentation phase based on fixation locations from recall. The resulting density maps encode fixated scene elements that observers remembered, indicating importance of scene elements. We find that these maps contain top-down rather than bottom-up features.}, howpublished = {Presented at 41st European Conference on Visual Perception (ECVP)}, keywords = {}, pubstate = {published}, tppubtype = {misc} } It has been shown that not all fixated locations in a scene are encoded in visual memory. We propose a new way to probe experimentally whether the scene content corresponding to a fixation was considered important by the observer. Our protocol is based on findings from mental imagery showing that fixation locations are reenacted during recall. We track observers' eye movements during stimulus presentation and subsequently, observers are asked to recall the visual content while looking at a neutral background. The tracked gaze locations from the two conditions are aligned using a novel elastic matching algorithm. Motivated by the hypothesis that visual content is recalled only if it has been encoded, we filter fixations from the presentation phase based on fixation locations from recall. The resulting density maps encode fixated scene elements that observers remembered, indicating importance of scene elements. We find that these maps contain top-down rather than bottom-up features. |
Piovarci, Michal; Wessely, Michael; Jagielski, Michal; Alexa, Marc; Matusik, Wojciech; Didyk, Piotr Design and analysis of directional front projection screens (Journal Article) Computers & Graphics, 74 , pp. 213-224, 2018, ISSN: 0097-8493. @article{Piovarci:2018:DAD, title = {Design and analysis of directional front projection screens}, author = {Michal Piovarci and Michael Wessely and Michal Jagielski and Marc Alexa and Wojciech Matusik and Piotr Didyk}, doi = {https://doi.org/10.1016/j.cag.2018.05.010}, issn = {0097-8493}, year = {2018}, date = {2018-08-01}, journal = {Computers & Graphics}, volume = {74}, pages = {213-224}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Lindlbauer, David; Wilson, Andy Remixed Reality: Manipulating Space and Time in Augmented Reality (Inproceeding) CHI 2018, ACM, 2018, ISBN: 978-1-4503-5620-6. @inproceedings{Lindlbauer2018, title = {Remixed Reality: Manipulating Space and Time in Augmented Reality}, author = {David Lindlbauer and Andy D. Wilson}, url = {https://dl.acm.org/authorize?N653481, Paper https://www.youtube.com/watch?v=BjhaZi1l-hY, Preview video https://www.youtube.com/watch?v=GoSQTPfrdCc, Video}, doi = {10.1145/3173574.3173703}, isbn = {978-1-4503-5620-6}, year = {2018}, date = {2018-04-22}, booktitle = {CHI 2018}, publisher = {ACM}, series = {CHI'18}, abstract = {We present Remixed Reality, a novel form of mixed reality. In contrast to classical mixed reality approaches where users see a direct view or video feed of their environment, with Remixed Reality they see a live 3D reconstruction, gathered from multiple external depth cameras. This approach enables changing the environment as easily as geometry can be changed in virtual reality, while allowing users to view and interact with the actual physical world as they would in augmented reality. We characterize a taxonomy of manipulations that are possible with Remixed Reality: spatial changes such as erasing objects; appearance changes such as changing textures; temporal changes such as pausing time; and viewpoint changes that allow users to see the world from different points without changing their physical location. We contribute a method that uses an underlying voxel grid holding information like visibility and transformations, which is applied to live geometry in real time.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } We present Remixed Reality, a novel form of mixed reality. In contrast to classical mixed reality approaches where users see a direct view or video feed of their environment, with Remixed Reality they see a live 3D reconstruction, gathered from multiple external depth cameras. This approach enables changing the environment as easily as geometry can be changed in virtual reality, while allowing users to view and interact with the actual physical world as they would in augmented reality. We characterize a taxonomy of manipulations that are possible with Remixed Reality: spatial changes such as erasing objects; appearance changes such as changing textures; temporal changes such as pausing time; and viewpoint changes that allow users to see the world from different points without changing their physical location. We contribute a method that uses an underlying voxel grid holding information like visibility and transformations, which is applied to live geometry in real time. |
Fender, Andreas; Herholz, Philipp; Alexa, Marc; Müller, Jörg OptiSpace: Automated Placement of Interactive 3D Projection Mapping Content (Inproceeding) Conference on Human Factors in Computing Systems, CHI'18, ACM, 2018, ISBN: 978-1-4503-5620-6. @inproceedings{Fender2018, title = {OptiSpace: Automated Placement of Interactive 3D Projection Mapping Content}, author = {Andreas Fender and Philipp Herholz and Marc Alexa and Jörg Müller}, url = {https://www.youtube.com/watch?v=Z74Zfyz_mP4, Preview video https://www.youtube.com/watch?v=YXhN9N_M4Bg, Video}, doi = {10.1145/3173574.3173843}, isbn = {978-1-4503-5620-6}, year = {2018}, date = {2018-04-22}, booktitle = {Conference on Human Factors in Computing Systems, CHI'18}, publisher = {ACM}, series = {CHI'18}, abstract = {We present OptiSpace, a system for the automated placement of perspectively corrected projection mapping content. We analyze the geometry of physical surfaces and the viewing behavior of users over time using depth cameras. Our system measures user view behavior and simulates a virtual projection mapping scene users would see if content were placed in a particular way. OptiSpace evaluates the simulated scene according to perceptual criteria, including visibility and visual quality of virtual content. Finally, based on these evaluations, it optimizes content placement, using a two-phase procedure involving adaptive sampling and the covariance matrix adaptation algorithm. With our proposed architecture, projection mapping applications are developed without any knowledge of the physical layouts of the target environments. Applications can be deployed in different uncontrolled environments, such as living rooms and office spaces.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } We present OptiSpace, a system for the automated placement of perspectively corrected projection mapping content. We analyze the geometry of physical surfaces and the viewing behavior of users over time using depth cameras. Our system measures user view behavior and simulates a virtual projection mapping scene users would see if content were placed in a particular way. OptiSpace evaluates the simulated scene according to perceptual criteria, including visibility and visual quality of virtual content. Finally, based on these evaluations, it optimizes content placement, using a two-phase procedure involving adaptive sampling and the covariance matrix adaptation algorithm. With our proposed architecture, projection mapping applications are developed without any knowledge of the physical layouts of the target environments. Applications can be deployed in different uncontrolled environments, such as living rooms and office spaces. |
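A minimal sketch of the second phase named in the abstract, refining a placement with the covariance matrix adaptation algorithm (CMA-ES), using the standard Python `cma` package. The scoring function here is a hypothetical stand-in for OptiSpace's perceptual evaluation of a simulated projection scene.

```python
import cma

def placement_score(params):
    # Hypothetical placement evaluation (lower is better); in OptiSpace this
    # would score a simulated projection for visibility and visual quality.
    x, y, scale = params
    return (x - 1.0) ** 2 + (y + 0.5) ** 2 + (scale - 2.0) ** 2

# Initial placement guess and initial step size for CMA-ES.
es = cma.CMAEvolutionStrategy([0.0, 0.0, 1.0], 0.5)
es.optimize(placement_score)
best_placement = es.result.xbest
```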
2017 |
Wang, Xi; Alexa, Marc Maps of Visual Importance (Online) arXiv preprint arXiv:1712.02142 2017. @online{Visualimportance, title = {Maps of Visual Importance}, author = {Xi Wang and Marc Alexa}, url = {https://arxiv.org/abs/1712.02142}, year = {2017}, date = {2017-12-06}, organization = {arXiv preprint arXiv:1712.02142}, abstract = {The importance of an element in a visual stimulus is commonly associated with the fixations during a free-viewing task. We argue that fixations are not always correlated with attention or awareness of visual objects. We suggest filtering the fixations recorded during exploration of the image based on the fixations recorded during recalling the image against a neutral background. This idea exploits that eye movements are a spatial index into the memory of a visual stimulus. We perform an experiment in which we record the eye movements of 30 observers during the presentation and recollection of 100 images. The locations of fixations during recall are only qualitatively related to the fixations during exploration. We develop a deformation mapping technique to align the fixations from recall with the fixations during exploration. This allows filtering the fixations based on proximity, and a threshold on proximity provides a convenient slider to control the amount of filtering. Analyzing the spatial histograms resulting from the filtering procedure as well as the set of removed fixations shows that certain types of scene elements, which could be considered irrelevant, are removed. In this sense, they provide a measure of importance of visual elements for human observers.}, keywords = {}, pubstate = {published}, tppubtype = {online} } The importance of an element in a visual stimulus is commonly associated with the fixations during a free-viewing task. We argue that fixations are not always correlated with attention or awareness of visual objects. We suggest filtering the fixations recorded during exploration of the image based on the fixations recorded during recalling the image against a neutral background. This idea exploits that eye movements are a spatial index into the memory of a visual stimulus. We perform an experiment in which we record the eye movements of 30 observers during the presentation and recollection of 100 images. The locations of fixations during recall are only qualitatively related to the fixations during exploration. We develop a deformation mapping technique to align the fixations from recall with the fixations during exploration. This allows filtering the fixations based on proximity, and a threshold on proximity provides a convenient slider to control the amount of filtering. Analyzing the spatial histograms resulting from the filtering procedure as well as the set of removed fixations shows that certain types of scene elements, which could be considered irrelevant, are removed. In this sense, they provide a measure of importance of visual elements for human observers. |
Herholz, Philipp; Davis, Timothy; Alexa, Marc Localized solutions of sparse linear systems for geometry processing (Journal Article) ACM Transactions on Graphics, 36 (6), 2017. @article{Herholz:2017, title = {Localized solutions of sparse linear systems for geometry processing}, author = {Philipp Herholz and Timothy A. Davis and Marc Alexa}, url = {https://dl.acm.org/authorize?N668657, ACM Authorized Paper}, doi = {10.1145/3130800.3130849}, year = {2017}, date = {2017-11-06}, booktitle = {ACM Transactions on Graphics (TOG)}, journal = {ACM Transactions on Graphics}, volume = {36}, number = {6}, publisher = {ACM}, abstract = {Computing solutions to linear systems is a fundamental building block of many geometry processing algorithms. In many cases the Cholesky factorization of the system matrix is computed to subsequently solve the system, possibly for many right-hand sides, using forward and back substitution. We demonstrate how to exploit sparsity in both the right-hand side and the set of desired solution values to obtain significant speedups. The method is easy to implement and potentially useful in any scenario where linear problems have to be solved locally. We show that this technique is useful for geometry processing operations, in particular we consider the solution of diffusion problems. All problems profit significantly from sparse computations in terms of runtime, which we demonstrate by providing timings for a set of numerical experiments.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Computing solutions to linear systems is a fundamental building block of many geometry processing algorithms. In many cases the Cholesky factorization of the system matrix is computed to subsequently solve the system, possibly for many right-hand sides, using forward and back substitution. We demonstrate how to exploit sparsity in both the right-hand side and the set of desired solution values to obtain significant speedups. The method is easy to implement and potentially useful in any scenario where linear problems have to be solved locally. We show that this technique is useful for geometry processing operations, in particular we consider the solution of diffusion problems. All problems profit significantly from sparse computations in terms of runtime, which we demonstrate by providing timings for a set of numerical experiments. |
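One classical ingredient behind exploiting a sparse right-hand side is that, for a lower-triangular solve L x = b, only the entries of x reachable from the nonzeros of b (in the graph of L) can become nonzero. A minimal, self-contained sketch of such a reach-restricted forward substitution, not the paper's full method:

```python
import numpy as np
import scipy.sparse as sp

def sparse_forward_subst(L, b_idx, b_val):
    # L: lower-triangular CSC matrix with its diagonal entries stored;
    # b is given sparsely as index list b_idx with values b_val.
    n = L.shape[0]
    # Nonzero set of x = nodes reachable from nonzeros of b via edges
    # j -> i for every stored L[i, j] with i > j.
    reach, stack, seen = [], list(b_idx), set()
    while stack:
        j = stack.pop()
        if j in seen:
            continue
        seen.add(j)
        reach.append(j)
        rows = L.indices[L.indptr[j]:L.indptr[j + 1]]
        stack.extend(int(i) for i in rows if i > j)
    reach.sort()
    x = np.zeros(n)
    x[b_idx] = b_val
    for j in reach:                      # column-wise substitution, restricted
        start, end = L.indptr[j], L.indptr[j + 1]   # to the reach set
        rows = L.indices[start:end]
        vals = L.data[start:end]
        x[j] /= vals[rows == j][0]       # divide by the diagonal entry
        below = rows > j
        x[rows[below]] -= vals[below] * x[j]
    return x

# Verify against a small random lower-triangular system:
n = 8
L = (sp.tril(sp.random(n, n, density=0.3, random_state=0), -1)
     + sp.identity(n)).tocsc()
x = sparse_forward_subst(L, [2], [1.0])
b = np.zeros(n); b[2] = 1.0
assert np.allclose(L @ x, b)
```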
Fender, Andreas; Lindlbauer, David; Herholz, Philipp; Alexa, Marc; Müller, Jörg HeatSpace: Automatic Placement of Displays by Empirical Analysis of User Behavior (Inproceeding) ACM Symposium on User Interface Software and Technology, UIST'17, pp. 611-621 , ACM, 2017, ISBN: 978-1-4503-4981-9. @inproceedings{Fender2017, title = {HeatSpace: Automatic Placement of Displays by Empirical Analysis of User Behavior}, author = {Andreas Fender and David Lindlbauer and Philipp Herholz and Marc Alexa and Jörg Müller}, url = {http://dl.acm.org/authorize?N40687, Paper https://www.youtube.com/watch?v=9IQFY_fNz_w, Preview video https://www.youtube.com/watch?v=pSZHUseWtj4, Video}, doi = {10.1145/3126594.3126621}, isbn = {978-1-4503-4981-9}, year = {2017}, date = {2017-10-25}, booktitle = {ACM Symposium on User Interface Software and Technology, UIST'17}, pages = {611-621 }, publisher = {ACM}, series = {UIST'17}, abstract = {We present HeatSpace, a system that records and empirically analyzes user behavior in a space and automatically suggests positions and sizes for new displays. The system uses depth cameras to capture 3D geometry and users' perspectives over time. To derive possible display placements, it calculates volumetric heatmaps describing geometric persistence and planarity of structures inside the space. It evaluates visibility of display poses by calculating a volumetric heatmap describing occlusions, position within users' field of view, and viewing angle. Optimal display size is calculated through a heatmap of average viewing distance. Based on the heatmaps and user constraints we sample the space of valid display placements and jointly optimize their positions. This can be useful when installing displays in multi-display environments such as meeting rooms, offices, and train stations.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } We present HeatSpace, a system that records and empirically analyzes user behavior in a space and automatically suggests positions and sizes for new displays. The system uses depth cameras to capture 3D geometry and users' perspectives over time. To derive possible display placements, it calculates volumetric heatmaps describing geometric persistence and planarity of structures inside the space. It evaluates visibility of display poses by calculating a volumetric heatmap describing occlusions, position within users' field of view, and viewing angle. Optimal display size is calculated through a heatmap of average viewing distance. Based on the heatmaps and user constraints we sample the space of valid display placements and jointly optimize their positions. This can be useful when installing displays in multi-display environments such as meeting rooms, offices, and train stations. |
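A much simplified sketch of one of the volumetric heatmaps described above: accumulating samples along observed view rays into a voxel grid. Grid resolution, ray length, and step size are illustrative assumptions; the actual system derives several heatmaps (persistence, occlusion, viewing angle) from depth-camera data.

```python
import numpy as np

def view_heatmap(grid_shape, voxel_size, head_positions, gaze_dirs,
                 max_dist=5.0, step=0.05):
    # Accumulate how often each voxel falls along a user's viewing ray.
    heat = np.zeros(grid_shape)
    for o, d in zip(head_positions, gaze_dirs):
        d = d / np.linalg.norm(d)
        for s in np.arange(0.0, max_dist, step):     # sample along the ray
            idx = np.floor((o + s * d) / voxel_size).astype(int)
            if np.all(idx >= 0) and np.all(idx < np.array(grid_shape)):
                heat[tuple(idx)] += 1.0
    return heat
```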
Lindlbauer, David Optically Dynamic Interfaces (Inproceeding) Adjunct Publication of the 30th Annual ACM Symposium on User Interface Software and Technology, UIST'17 Adjunct, pp. 107-110 , ACM, 2017, ISBN: 978-1-4503-5419-6. @inproceedings{Lindlbauer17ODI, title = {Optically Dynamic Interfaces}, author = {David Lindlbauer}, url = {http://dl.acm.org/authorize?N40688, Paper}, doi = {10.1145/3131785.3131840}, isbn = {978-1-4503-5419-6}, year = {2017}, date = {2017-10-22}, booktitle = {Adjunct Publication of the 30th Annual ACM Symposium on User Interface Software and Technology, UIST'17 Adjunct}, pages = {107-110 }, publisher = {ACM}, series = {UIST'17 Adjunct}, abstract = {In the virtual world, changing properties of objects such as their color, size or shape is one of the main means of communication. Objects are hidden or revealed when needed, or undergo changes in color or size to communicate importance. I am interested in how these features can be brought into the real world by modifying the optical properties of physical objects and devices, and how this dynamic appearance influences interaction and behavior. The interplay of creating functional prototypes of interactive artifacts and devices, and studying them in controlled experiments forms the basis of my research. During my research I created a three level model describing how physical artifacts and interfaces can be appropriated to allow for dynamic appearance: (1) dynamic objects, (2) augmented objects, and (3) augmented surroundings. This position paper outlines these three levels and details instantiations of each level that were created in the context of this thesis research.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } In the virtual world, changing properties of objects such as their color, size or shape is one of the main means of communication. Objects are hidden or revealed when needed, or undergo changes in color or size to communicate importance. I am interested in how these features can be brought into the real world by modifying the optical properties of physical objects and devices, and how this dynamic appearance influences interaction and behavior. The interplay of creating functional prototypes of interactive artifacts and devices, and studying them in controlled experiments forms the basis of my research. During my research I created a three level model describing how physical artifacts and interfaces can be appropriated to allow for dynamic appearance: (1) dynamic objects, (2) augmented objects, and (3) augmented surroundings. This position paper outlines these three levels and details instantiations of each level that were created in the context of this thesis research. |
Wang, Xi; Maertens, Marianne; Alexa, Marc 3D Eye Tracking in Monocular and Binocular Conditions (Miscellaneous) Presented at 19th European Conference on Eye Movements (ECEM), 2017. @misc{Eyetracking, title = {3D Eye Tracking in Monocular and Binocular Conditions}, author = {Xi Wang and Marianne Maertens and Marc Alexa}, url = {https://social.hse.ru/data/2017/10/26/1157724079/ECEM_Booklet.pdf}, year = {2017}, date = {2017-08-20}, abstract = {Results of eye tracking experiments on vergence are contradictory: for example, the point of vergence has been found in front of as well as behind the target location. The point of vergence is computed by intersecting two lines associated with pupil positions. This approach requires that a fixed eye position corresponds to a straight line of targets in space. However, as long as the targets in an experiment are distributed on a surface (e.g. a monitor), the straight-line assumption cannot be validated; inconsistencies would be hidden in the model estimated during the calibration procedure. We have developed an experimental setup for 3D eye tracking based on fiducial markers, whose positions are estimated using computer vision techniques. This allows us to map points in 3D space to pupil positions and, thus, test the straight-line hypothesis. In the experiment, we test both monocular and binocular viewing conditions. Preliminary results suggest that a) the monocular condition is consistent with the straight-line hypothesis and b) binocular viewing shows disparity under the monocular straight line model. This implies that binocular calibration is unsuitable for experiments about vergence. Further analysis aims at a consistent model of binocular viewing.}, howpublished = {Presented at 19th European Conference on Eye Movements (ECEM)}, keywords = {}, pubstate = {published}, tppubtype = {misc} } Results of eye tracking experiments on vergence are contradictory: for example, the point of vergence has been found in front of as well as behind the target location. The point of vergence is computed by intersecting two lines associated with pupil positions. This approach requires that a fixed eye position corresponds to a straight line of targets in space. However, as long as the targets in an experiment are distributed on a surface (e.g. a monitor), the straight-line assumption cannot be validated; inconsistencies would be hidden in the model estimated during the calibration procedure. We have developed an experimental setup for 3D eye tracking based on fiducial markers, whose positions are estimated using computer vision techniques. This allows us to map points in 3D space to pupil positions and, thus, test the straight-line hypothesis. In the experiment, we test both monocular and binocular viewing conditions. Preliminary results suggest that a) the monocular condition is consistent with the straight-line hypothesis and b) binocular viewing shows disparity under the monocular straight line model. This implies that binocular calibration is unsuitable for experiments about vergence. Further analysis aims at a consistent model of binocular viewing. |
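Since two gaze rays in 3D rarely intersect exactly, the vergence point mentioned above is naturally taken as the point closest to both lines in the least-squares sense. A minimal sketch (eye positions and directions are illustrative):

```python
import numpy as np

def nearest_point_to_lines(origins, directions):
    # For each line, P = I - d d^T projects onto the plane orthogonal to its
    # unit direction d; the point minimizing the summed squared distances to
    # all lines solves (sum_i P_i) x = sum_i P_i o_i.
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two nearly converging gaze rays from eyes 6 cm apart:
d1 = np.array([0.05, 0.0, 1.0]); d1 /= np.linalg.norm(d1)
d2 = np.array([-0.05, 0.0, 1.0]); d2 /= np.linalg.norm(d2)
vergence = nearest_point_to_lines(
    [np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])], [d1, d2])
```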
Piovarci, Michal; Wessely, Michael; Jagielski, Michal; Alexa, Marc; Matusik, Wojciech; Didyk, Piotr Directional Screens (Inproceeding) Proceedings of the 1st Annual ACM Symposium on Computational Fabrication, pp. 1:1–1:10, ACM, New York, NY, USA, 2017, ISBN: 978-1-4503-4999-4. @inproceedings{Piovarci:2017:DS, title = {Directional Screens}, author = {Michal Piovarci and Michael Wessely and Michal Jagielski and Marc Alexa and Wojciech Matusik and Piotr Didyk}, url = {https://dl.acm.org/authorize?N655725, Paper (ACM Authorized)}, doi = {10.1145/3083157.3083162}, isbn = {978-1-4503-4999-4}, year = {2017}, date = {2017-06-12}, booktitle = {Proceedings of the 1st Annual ACM Symposium on Computational Fabrication}, pages = {1:1--1:10}, publisher = {ACM}, address = {New York, NY, USA}, abstract = {The goal of display and screen manufacturers is to design devices or surfaces that maximize the perceived image quality, e.g., resolution, brightness, and color reproduction. Very often, a particular viewer location is not taken into account, and the quality is maximized across all viewing directions. This, however, has significant implications for energy efficiency. There is usually a very wide range of viewing directions (e.g., ceiling, floor, or walls) for which the displayed content does not need to be provided. Ignoring this fact results in energy waste due to a significant amount of light reflected towards these regions. In our work, we propose a new type of screen - directional screens, which can be customized depending on a specific audience layout. They can provide up to 5 times increased gain when compared to high-gain screens and up to 15 times brighter reflection than a matte screen. In addition, they provide uniform brightness across all viewing directions, which addresses the problem of "hot-spotting" in high-gain screens. The key idea of our approach is to build a front-projection screen from tiny, highly reflective surfaces. Each of these surfaces is carefully designed so that it reflects the light only towards the audience. In this paper, we propose a complete process for designing and manufacturing such screens. We also validate our concept in simulations and by fabricating several fragments of big screens.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } The goal of display and screen manufacturers is to design devices or surfaces that maximize the perceived image quality, e.g., resolution, brightness, and color reproduction. Very often, a particular viewer location is not taken into account, and the quality is maximized across all viewing directions. This, however, has significant implications for energy efficiency. There is usually a very wide range of viewing directions (e.g., ceiling, floor, or walls) for which the displayed content does not need to be provided. Ignoring this fact results in energy waste due to a significant amount of light reflected towards these regions. In our work, we propose a new type of screen - directional screens, which can be customized depending on a specific audience layout. They can provide up to 5 times increased gain when compared to high-gain screens and up to 15 times brighter reflection than a matte screen. In addition, they provide uniform brightness across all viewing directions, which addresses the problem of "hot-spotting" in high-gain screens. The key idea of our approach is to build a front-projection screen from tiny, highly reflective surfaces. 
Each of these surfaces is carefully designed so that it reflects the light only towards the audience. In this paper, we propose a complete process for designing and manufacturing such screens. We also validate our concept in simulations and by fabricating several fragments of big screens. |
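The core per-facet constraint can be stated compactly: a mirror facet reflects the projector ray toward a viewer exactly when its normal is the half-vector between the two directions. A minimal sketch of that geometry, with illustrative positions (the paper's design and manufacturing pipeline goes well beyond this):

```python
import numpy as np

def facet_normal(facet_pos, projector_pos, viewer_pos):
    # Mirror reflection law: the normal bisects the directions from the
    # facet to the projector and from the facet to the viewer.
    to_proj = projector_pos - facet_pos
    to_view = viewer_pos - facet_pos
    h = to_proj / np.linalg.norm(to_proj) + to_view / np.linalg.norm(to_view)
    return h / np.linalg.norm(h)

n = facet_normal(np.array([0.0, 0.0, 0.0]),    # point on the screen
                 np.array([0.0, -1.0, 3.0]),   # projector position
                 np.array([0.5, -1.2, 4.0]))   # a seat in the audience
```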
Lindlbauer, David; Müller, Jörg; Alexa, Marc Changing the Appearance of Real-World Objects by Modifying Their Surroundings (Inproceeding) ACM CHI Conference on Human Factors in Computing Systems, CHI'17, pp. 3954-3965, ACM, 2017, ISBN: 978-1-4503-4655-9/17/05. @inproceedings{Lindlbauer2017, title = {Changing the Appearance of Real-World Objects by Modifying Their Surroundings}, author = {David Lindlbauer and Jörg Müller and Marc Alexa}, url = {http://dl.acm.org/authorize?N37918, Paper https://www.cg.tu-berlin.de/research/projects/illusionary-interfaces, Project website https://www.youtube.com/watch?v=bxWZP_m3PQQ, Preview video https://www.youtube.com/watch?v=C-XO06wwQuY, Video }, doi = {10.1145/3025453.3025795}, isbn = { 978-1-4503-4655-9/17/05}, year = {2017}, date = {2017-05-08}, booktitle = {ACM CHI Conference on Human Factors in Computing Systems, CHI'17}, pages = {3954-3965}, publisher = {ACM}, series = {CHI'17}, abstract = {We present an approach to alter the perceived appearance of physical objects by controlling their surrounding space. Many real-world objects cannot easily be equipped with displays or actuators in order to change their shape. While common approaches such as projection mapping enable changing the appearance of objects without modifying them, certain surface properties (e.g. highly reflective or transparent surfaces) can make employing these techniques difficult. In this work, we present a conceptual design exploration on how the appearance of an object can be changed by solely altering the space around it, rather than the object itself. In a proof-of-concept implementation, we place objects onto a tabletop display and track them together with users to display perspective-corrected 3D graphics for augmentation. This enables controlling properties such as the perceived size, color, or shape of objects. We characterize the design space of our approach and demonstrate potential applications. For example, we change the contour of a wallet to notify users when their bank account is debited. We envision our approach to gain in importance with increasing ubiquity of display surfaces.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } We present an approach to alter the perceived appearance of physical objects by controlling their surrounding space. Many real-world objects cannot easily be equipped with displays or actuators in order to change their shape. While common approaches such as projection mapping enable changing the appearance of objects without modifying them, certain surface properties (e.g. highly reflective or transparent surfaces) can make employing these techniques difficult. In this work, we present a conceptual design exploration on how the appearance of an object can be changed by solely altering the space around it, rather than the object itself. In a proof-of-concept implementation, we place objects onto a tabletop display and track them together with users to display perspective-corrected 3D graphics for augmentation. This enables controlling properties such as the perceived size, color, or shape of objects. We characterize the design space of our approach and demonstrate potential applications. For example, we change the contour of a wallet to notify users when their bank account is debited. We envision our approach to gain in importance with increasing ubiquity of display surfaces. |
Herholz, Philipp; Haase, Felix; Alexa, Marc Diffusion Diagrams: Voronoi Cells and Centroids from Diffusion (Journal Article) Computer Graphics Forum (Proc. of Eurographics), 2017. @article{Herholz:2015:DDb, title = {Diffusion Diagrams: Voronoi Cells and Centroids from Diffusion}, author = {Philipp Herholz and Felix Haase and Marc Alexa}, url = {https://cybertron.cg.tu-berlin.de/~philipp/EG2017/}, year = {2017}, date = {2017-04-24}, journal = {Computer Graphics Forum (Proc. of Eurographics)}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
Alexa, Marc; Hildebrand, Kristian; Lefebvre, Sylvain Optimal Discrete Slicing (Journal Article) ACM Transactions on Graphics, 36 (1), pp. 12:1-12:16, 2017, ISSN: 0730-0301. @article{Alexa:2017:ODS, title = {Optimal Discrete Slicing}, author = {Marc Alexa and Kristian Hildebrand and Sylvain Lefebvre}, url = {https://dl.acm.org/authorize?N655714, ACM Authorized Paper }, doi = {10.1145/2999536}, issn = {0730-0301}, year = {2017}, date = {2017-01-30}, journal = {ACM Transactions on Graphics}, volume = {36}, number = {1}, pages = {12:1-12:16}, abstract = {Slicing is the procedure necessary to prepare a shape for layered manufacturing. There are degrees of freedom in this process, such as the starting point of the slicing sequence and the thickness of each slice. The choice of these parameters influences the manufacturing process and its result: The number of slices significantly affects the time needed for manufacturing, while their thickness affects the error. Assuming a discrete setting, we measure the error as the number of voxels that are incorrectly assigned due to slicing. We provide an algorithm that generates, for a given set of available slice heights and a shape, a slicing that is provably optimal. By optimal, we mean that the algorithm generates sequences with minimal error for any possible number of slices. The algorithm is fast and flexible, that is, it can accommodate a user driven importance modulation of the error function and allows the interactive exploration of the desired quality/time tradeoff. We demonstrate the practical importance of our optimization on several three-dimensional-printed results.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Slicing is the procedure necessary to prepare a shape for layered manufacturing. There are degrees of freedom in this process, such as the starting point of the slicing sequence and the thickness of each slice. The choice of these parameters influences the manufacturing process and its result: The number of slices significantly affects the time needed for manufacturing, while their thickness affects the error. Assuming a discrete setting, we measure the error as the number of voxels that are incorrectly assigned due to slicing. We provide an algorithm that generates, for a given set of available slice heights and a shape, a slicing that is provably optimal. By optimal, we mean that the algorithm generates sequences with minimal error for any possible number of slices. The algorithm is fast and flexible, that is, it can accommodate a user driven importance modulation of the error function and allows the interactive exploration of the desired quality/time tradeoff. We demonstrate the practical importance of our optimization on several three-dimensional-printed results. |
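In the discrete setting described above, an optimal slice sequence can be found with a simple dynamic program over voxel rows. The sketch below assumes integer slice thicknesses (in voxel rows) and a user-supplied, application-defined callback slice_error(h, t) giving the error of one slice covering rows h..h+t; it illustrates the flavor of the optimization, not the paper's exact algorithm.

```python
def optimal_slicing(n, thicknesses, slice_error):
    # n: shape height in voxel rows; thicknesses: available integer slice
    # heights; slice_error(h, t): error of a slice covering rows [h, h+t).
    INF = float("inf")
    best = [INF] * (n + 1)
    best[n] = 0.0                       # no error above the top of the shape
    choice = [None] * n
    for h in range(n - 1, -1, -1):      # fill the table bottom-up in h
        for t in thicknesses:
            if h + t <= n:
                cost = slice_error(h, t) + best[h + t]
                if cost < best[h]:
                    best[h] = cost
                    choice[h] = t
    if best[0] == INF:
        raise ValueError("height not reachable with the given thicknesses")
    slices, h = [], 0
    while h < n:                        # read back the optimal slice sequence
        slices.append(choice[h])
        h += choice[h]
    return best[0], slices
```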
Wang, Xi; Lindlbauer, David; Lessig, Christian; Alexa, Marc Accuracy of Monocular Gaze Tracking on 3D Geometry (Book Chapter) Burch, Michael; Chuang, Lewis; Fisher, Brian; Schmidt, Albrecht; Weiskopf, Daniel (Ed.): Eye Tracking and Visualization, Chapter 10, Springer, 2017, ISBN: 978-3-319-47023-8. @inbook{Wang2017, title = {Accuracy of Monocular Gaze Tracking on 3D Geometry}, author = {Xi Wang and David Lindlbauer and Christian Lessig and Marc Alexa}, editor = {Michael Burch and Lewis Chuang and Brian Fisher and Albrecht Schmidt and Daniel Weiskopf}, url = {http://www.springer.com/de/book/9783319470238}, isbn = {978-3-319-47023-8}, year = {2017}, date = {2017-01-29}, booktitle = {Eye Tracking and Visualization}, publisher = {Springer}, chapter = {10}, abstract = {Many applications such as data visualization or object recognition benefit from accurate knowledge of where a person is looking. We present a system for accurately tracking gaze positions on a three dimensional object using a monocular head mounted eye tracker. We accomplish this by (1) using digital manufacturing to create stimuli whose geometry is known to high accuracy, (2) embedding fiducial markers into the manufactured objects to reliably estimate the rigid transformation of the object, and, (3) using a perspective model to relate pupil positions to 3D locations. This combination enables the efficient and accurate computation of gaze position on an object from measured pupil positions. We validate the accuracy of our system experimentally, achieving an angular resolution of 0.8 degree and a 1.5 % depth error using a simple calibration procedure with 11 points.}, keywords = {}, pubstate = {published}, tppubtype = {inbook} } Many applications such as data visualization or object recognition benefit from accurate knowledge of where a person is looking. We present a system for accurately tracking gaze positions on a three dimensional object using a monocular head mounted eye tracker. We accomplish this by (1) using digital manufacturing to create stimuli whose geometry is known to high accuracy, (2) embedding fiducial markers into the manufactured objects to reliably estimate the rigid transformation of the object, and, (3) using a perspective model to relate pupil positions to 3D locations. This combination enables the efficient and accurate computation of gaze position on an object from measured pupil positions. We validate the accuracy of our system experimentally, achieving an angular resolution of 0.8 degree and a 1.5 % depth error using a simple calibration procedure with 11 points. |
2016 |
Lindlbauer, David; Müller, Jörg; Alexa, Marc Changing the Appearance of Physical Interfaces Through Controlled Transparency (Inproceeding) 29th Annual Symposium on User Interface Software and Technology, UIST'16, pp. 425-435, ACM, 2016, ISBN: 978-1-4503-4189-9. @inproceedings{Lindlbauer2016c, title = {Changing the Appearance of Physical Interfaces Through Controlled Transparency}, author = {David Lindlbauer and Jörg Müller and Marc Alexa}, url = {http://dl.acm.org/authorize?N25034, Paper https://www.cg.tu-berlin.de/research/projects/transparency-controlled-physical-interfaces/, Project website https://www.youtube.com/watch?time_continue=1&v=VrLAoP4wm9o, Preview video https://www.youtube.com/watch?v=f3e3SI-CKBM, Video}, doi = {10.1145/2984511.2984556}, isbn = {978-1-4503-4189-9}, year = {2016}, date = {2016-10-18}, booktitle = {29th Annual Symposium on User Interface Software and Technology, UIST'16}, pages = {425-435}, publisher = {ACM}, series = {UIST'16}, abstract = {We present physical interfaces that change their appearance through controlled transparency. These transparency-controlled physical interfaces are well suited for applications where communication through optical appearance is sufficient, such as ambient display scenarios. They transition between perceived shapes within milliseconds, require no mechanically moving parts and consume little energy. We build 3D physical interfaces with individually controllable parts by laser cutting and folding a single sheet of transparency-controlled material. Electrical connections are engraved in the surface, eliminating the need for wiring individual parts. We consider our work as complementary to current shape-changing interfaces. While our proposed interfaces do not exhibit dynamic tangible qualities, they have unique benefits such as the ability to create apparent holes or nesting of objects. We explore the benefits of transparency-controlled physical interfaces by characterizing their design space and showcase four physical prototypes: two activity indicators, a playful avatar, and a lamp shade with dynamic appearance.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } We present physical interfaces that change their appearance through controlled transparency. These transparency-controlled physical interfaces are well suited for applications where communication through optical appearance is sufficient, such as ambient display scenarios. They transition between perceived shapes within milliseconds, require no mechanically moving parts and consume little energy. We build 3D physical interfaces with individually controllable parts by laser cutting and folding a single sheet of transparency-controlled material. Electrical connections are engraved in the surface, eliminating the need for wiring individual parts. We consider our work as complementary to current shape-changing interfaces. While our proposed interfaces do not exhibit dynamic tangible qualities, they have unique benefits such as the ability to create apparent holes or nesting of objects. We explore the benefits of transparency-controlled physical interfaces by characterizing their design space and showcase four physical prototypes: two activity indicators, a playful avatar, and a lamp shade with dynamic appearance. |
Wang, Xi; Lindlbauer, David; Lessig, Christian; Maertens, Marianne; Alexa, Marc Measuring Visual Salience of 3D Printed Objects (Journal Article) IEEE Computer Graphics and Applications Special Issue on Quality Assessment and Perception in Computer Graphics , 2016, ISSN: 0272-1716. @article{Wang2016, title = {Measuring Visual Salience of 3D Printed Objects}, author = {Xi Wang and David Lindlbauer and Christian Lessig and Marianne Maertens and Marc Alexa}, url = {http://cybertron.cg.tu-berlin.de/xiwang/project_saliency/index.html, Project page http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7478427&newsearch=true&queryText=Measuring%20Visual%20Salience%20of%203D%20Printed%20Objects, PDF http://cybertron.cg.tu-berlin.de/xiwang/visual_salience_video.mp4, Video}, doi = {10.1109/MCG.2016.47}, issn = {0272-1716}, year = {2016}, date = {2016-05-25}, journal = {IEEE Computer Graphics and Applications Special Issue on Quality Assessment and Perception in Computer Graphics }, abstract = {We investigate human viewing behavior on physical realizations of 3D objects. Using an eye tracker with scene camera and fiducial markers we are able to gather fixations on the surface of the presented stimuli. This data is used to validate assumptions regarding visual saliency so far only experimentally analyzed using flat stimuli. We provide a way to compare fixation sequences from different subjects as well as a model for generating test sequences of fixations unrelated to the stimuli. This way we can show that human observers agree in their fixations for the same object under similar viewing conditions – as expected based on similar results for flat stimuli. We also develop a simple procedure to validate computational models for visual saliency of 3D objects and use it to show that popular models of mesh salience based on center-surround patterns fail to predict fixations.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We investigate human viewing behavior on physical realizations of 3D objects. Using an eye tracker with scene camera and fiducial markers we are able to gather fixations on the surface of the presented stimuli. This data is used to validate assumptions regarding visual saliency so far only experimentally analyzed using flat stimuli. We provide a way to compare fixation sequences from different subjects as well as a model for generating test sequences of fixations unrelated to the stimuli. This way we can show that human observers agree in their fixations for the same object under similar viewing conditions – as expected based on similar results for flat stimuli. We also develop a simple procedure to validate computational models for visual saliency of 3D objects and use it to show that popular models of mesh salience based on center-surround patterns fail to predict fixations. |
Lindlbauer, David; Lilija, Klemen; Walter, Robert; Müller, Jörg Influence of Display Transparency on Background Awareness and Task Performance (Inproceeding) SIGCHI Conference on Human Factors in Computing Systems, CHI'16, pp. 1705-1716, ACM, 2016, ISBN: 978-1-4503-3362-7 . @inproceedings{Lindlbauer2016, title = {Influence of Display Transparency on Background Awareness and Task Performance}, author = {David Lindlbauer and Klemen Lilija and Robert Walter and Jörg Müller }, url = {http://dl.acm.org/authorize?N03925, Paper https://www.youtube.com/watch?v=OATMC0odHrE, Preview video https://www.youtube.com/watch?v=8wWlO97V_OM, Video}, doi = {10.1145/2858036.2858453}, isbn = {978-1-4503-3362-7 }, year = {2016}, date = {2016-05-07}, booktitle = {SIGCHI Conference on Human Factors in Computing Systems, CHI'16}, pages = {1705-1716}, publisher = {ACM}, series = {CHI'16}, abstract = {It has been argued that transparent displays are beneficial for certain tasks by allowing users to simultaneously see on-screen content as well as the environment behind the display. However, it is yet unclear how much background awareness users gain and if performance suffers for tasks performed on the transparent display, since users are no longer shielded from distractions. Therefore, we investigate the influence of display transparency on task performance and background awareness in a dual-task scenario. We conducted an experiment comparing transparent displays with conventional displays in different horizontal and vertical configurations. Participants performed an attention-demanding primary task on the display while simultaneously observing the background for target stimuli. Our results show that transparent and horizontal displays increase the ability of participants to observe the background while keeping primary task performance constant. }, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } It has been argued that transparent displays are beneficial for certain tasks by allowing users to simultaneously see on-screen content as well as the environment behind the display. However, it is yet unclear how much background awareness users gain and if performance suffers for tasks performed on the transparent display, since users are no longer shielded from distractions. Therefore, we investigate the influence of display transparency on task performance and background awareness in a dual-task scenario. We conducted an experiment comparing transparent displays with conventional displays in different horizontal and vertical configurations. Participants performed an attention-demanding primary task on the display while simultaneously observing the background for target stimuli. Our results show that transparent and horizontal displays increase the ability of participants to observe the background while keeping primary task performance constant. |
Lindlbauer, David; Grønbæk, Jens Emil; Birk, Mortensen; Halskov, Kim; Alexa, Marc; Müller, Jörg Combining Shape-Changing Interfaces and Spatial Augmented Reality Enables Extended Object Appearance (Inproceeding) SIGCHI Conference on Human Factors in Computing Systems, CHI'16, pp. 791-802, ACM, 2016, ISBN: 978-1-4503-3362-7. @inproceedings{Lindlbauer2016b, title = {Combining Shape-Changing Interfaces and Spatial Augmented Reality Enables Extended Object Appearance }, author = {David Lindlbauer and Jens Emil Grønbæk and Mortensen Birk and Kim Halskov and Marc Alexa and Jörg Müller}, url = {http://dl.acm.org/authorize?N04344, Paper http://www.cg.tu-berlin.de/research/projects/sci-and-ar/, Project page https://www.youtube.com/watch?v=uyvBJqv3s_M, Preview video https://www.youtube.com/watch?v=fWREdKL2Kus, Video}, doi = {10.1145/2858036.2858457}, isbn = {978-1-4503-3362-7}, year = {2016}, date = {2016-05-07}, booktitle = {SIGCHI Conference on Human Factors in Computing Systems, CHI'16}, pages = {791-802}, publisher = {ACM}, series = {CHI'16}, abstract = {We propose combining shape-changing interfaces and spatial augmented reality for extending the space of appearances and interactions of actuated interfaces. While shape-changing interfaces can dynamically alter the physical appearance of objects, the integration of spatial augmented reality additionally allows for dynamically changing objects' optical appearance with high detail. This way, devices can render currently challenging features such as high frequency texture or fast motion. We frame this combination in the context of computer graphics with analogies to established techniques for increasing the realism of 3D objects such as bump mapping. This extensible framework helps us identify challenges of the two techniques and benefits of their combination. We utilize our prototype shape-changing device enriched with spatial augmented reality through projection mapping to demonstrate the concept. We present a novel mechanical distance-fields algorithm for real-time fitting of mechanically constrained shape-changing devices to arbitrary 3D graphics. Furthermore, we present a technique for increasing effective screen real estate for spatial augmented reality through view-dependent shape change. }, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } We propose combining shape-changing interfaces and spatial augmented reality for extending the space of appearances and interactions of actuated interfaces. While shape-changing interfaces can dynamically alter the physical appearance of objects, the integration of spatial augmented reality additionally allows for dynamically changing objects' optical appearance with high detail. This way, devices can render currently challenging features such as high frequency texture or fast motion. We frame this combination in the context of computer graphics with analogies to established techniques for increasing the realism of 3D objects such as bump mapping. This extensible framework helps us identify challenges of the two techniques and benefits of their combination. We utilize our prototype shape-changing device enriched with spatial augmented reality through projection mapping to demonstrate the concept. We present a novel mechanical distance-fields algorithm for real-time fitting of mechanically constrained shape-changing devices to arbitrary 3D graphics. 
Furthermore, we present a technique for increasing effective screen real estate for spatial augmented reality through view-dependent shape change. |
Richter, Ronald; Kyprianidis, Jan Eric; Springborn, Boris; Alexa, Marc Constrained Modeling of 3-valent Meshes Using a Hyperbolic Deformation Metric (Journal Article) Computer Graphics Forum, 2016, (in press). @article{Richter:2016:CM3, title = {Constrained Modeling of 3-valent Meshes Using a Hyperbolic Deformation Metric}, author = {Ronald Richter and Jan Eric Kyprianidis and Boris Springborn and Marc Alexa}, doi = {10.1111/cgf.12805}, year = {2016}, date = {2016-01-28}, journal = {Computer Graphics Forum}, abstract = {Polygon meshes with 3-valent vertices often occur as the frame of free-form surfaces in architecture, in which rigid beams are connected in rigid joints. For modeling such meshes it is desirable to measure the deformation of the joints' shapes. We show that it is natural to represent joint shapes as points in hyperbolic 3-space. This endows the space of joint shapes with a geometric structure that facilitates computation. We use this structure to optimize meshes towards different constraints, and we believe that it will be useful for other applications as well.}, note = {in press}, keywords = {}, pubstate = {published}, tppubtype = {article} } Polygon meshes with 3-valent vertices often occur as the frame of free-form surfaces in architecture, in which rigid beams are connected in rigid joints. For modeling such meshes it is desirable to measure the deformation of the joints' shapes. We show that it is natural to represent joint shapes as points in hyperbolic 3-space. This endows the space of joint shapes with a geometric structure that facilitates computation. We use this structure to optimize meshes towards different constraints, and we believe that it will be useful for other applications as well. |
2015 |
Tompkin, James; Muff, Samuel; McCann, James; Pfister, Hanspeter; Kautz, Jan; Alexa, Marc; Matusik, Wojciech Joint 5D Pen Input for Light Field Displays (Inproceeding) The 28th Annual ACM Symposium on User Interface Software and Technology, UIST'15, pp. 637–647, ACM, 2015, ISBN: 978-1-4503-3779-3. @inproceedings{Tompkin:2015:J5D, title = {Joint 5D Pen Input for Light Field Displays}, author = {James Tompkin and Samuel Muff and James McCann and Hanspeter Pfister and Jan Kautz and Marc Alexa and Wojciech Matusik}, url = {http://jamestompkin.com/pubs/lightfieldpainting/index.html, Project Website}, doi = {10.1145/2807442.2807477}, isbn = {978-1-4503-3779-3}, year = {2015}, date = {2015-11-10}, booktitle = {The 28th Annual ACM Symposium on User Interface Software and Technology, UIST'15}, pages = {637--647}, publisher = {ACM}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
Miruchna, Viktor; Walter, Robert; Lindlbauer, David; Lehmann, Maren; von Klitzing, Regina; Müller, Jörg GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel (Inproceeding) The 28th Annual ACM Symposium on User Interface Software and Technology, UIST'15, pp. 3–10, 2015, ISBN: 978-1-4503-3779-3. @inproceedings{Miruchna2015, title = {GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel}, author = {Viktor Miruchna and Robert Walter and David Lindlbauer and Maren Lehmann and Regina von Klitzing and Jörg Müller}, url = {http://dl.acm.org/authorize?N07197, Paper https://www.youtube.com/watch?v=o8W6qbwPhwU, Preview video https://www.youtube.com/watch?v=C40bl9qmLV0, Video}, doi = {10.1145/2807442.2807487}, isbn = {978-1-4503-3779-3}, year = {2015}, date = {2015-11-08}, booktitle = {The 28th Annual ACM Symposium on User Interface Software and Technology, UIST'15}, journal = {ACM User Interface Software and Technology Symposium (UIST) 2015}, pages = {3--10}, series = {UIST'15}, abstract = {We present GelTouch, a gel-based layer that can selectively transition between soft and stiff to provide tactile multi-touch feedback. It is flexible, transparent when not activated, and contains no mechanical, electromagnetic, or hydraulic components, resulting in a compact form factor (a 2 mm thin touchscreen layer for our prototype). The activated areas can be morphed freely and continuously, without being limited to fixed, predefined shapes. GelTouch consists of a poly(N-isopropylacrylamide) gel layer which alters its viscoelasticity when activated by applying heat (>32 °C). We present three different activation techniques: 1) Indium Tin Oxide (ITO) as a heating element that enables tactile feedback through individually addressable taxels; 2) predefined tactile areas of engraved ITO that can be layered and combined; 3) complex arrangements of resistance wire that create thin tactile edges. We present a tablet with 6×4 tactile areas, enabling a tactile numpad, slider, and thumbstick. We show that the gel is up to 25 times stiffer when activated and that users detect tactile features reliably (94.8%).}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } We present GelTouch, a gel-based layer that can selectively transition between soft and stiff to provide tactile multi-touch feedback. It is flexible, transparent when not activated, and contains no mechanical, electromagnetic, or hydraulic components, resulting in a compact form factor (a 2 mm thin touchscreen layer for our prototype). The activated areas can be morphed freely and continuously, without being limited to fixed, predefined shapes. GelTouch consists of a poly(N-isopropylacrylamide) gel layer which alters its viscoelasticity when activated by applying heat (>32 °C). We present three different activation techniques: 1) Indium Tin Oxide (ITO) as a heating element that enables tactile feedback through individually addressable taxels; 2) predefined tactile areas of engraved ITO that can be layered and combined; 3) complex arrangements of resistance wire that create thin tactile edges. We present a tablet with 6×4 tactile areas, enabling a tactile numpad, slider, and thumbstick. We show that the gel is up to 25 times stiffer when activated and that users detect tactile features reliably (94.8%). |
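A hypothetical controller sketch for GelTouch-style taxels: each taxel stiffens once its heater drives the gel past the ~32 °C transition temperature reported above. The device functions read_temp_c and set_heater_pwm are invented for illustration; the paper does not publish a control API:

```python
# Transition temperature of the PNIPAM gel, per the abstract above.
TRANSITION_C = 32.0

def update_taxels(target_stiff, read_temp_c, set_heater_pwm):
    """Bang-bang control: heat taxels that should be stiff until the gel
    exceeds its transition temperature; let all other taxels cool and
    relax back to the soft, transparent state."""
    for taxel, want_stiff in target_stiff.items():
        if want_stiff and read_temp_c(taxel) < TRANSITION_C + 1.0:
            set_heater_pwm(taxel, 1.0)   # full heating power
        else:
            set_heater_pwm(taxel, 0.0)   # heater off
```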
Wang, Xi; Lindlbauer, David; Lessig, Christian; Alexa, Marc Accuracy of Monocular Gaze Tracking on 3D Geometry (Incollection) Workshop on Eye Tracking and Visualization (ETVIS) co-located with IEEE VIS, 2015. @incollection{Wang2015, title = {Accuracy of Monocular Gaze Tracking on 3D Geometry}, author = {Xi Wang and David Lindlbauer and Christian Lessig and Marc Alexa}, url = {http://cybertron.cg.tu-berlin.de/xiwang/project_saliency/index.html, Project page http://www.vis.uni-stuttgart.de/etvis/ETVIS_2015.html, PDF}, year = {2015}, date = {2015-10-25}, booktitle = {Workshop on Eye Tracking and Visualization (ETVIS) co-located with IEEE VIS}, abstract = {Many applications in visualization benefit from accurate knowledge of what a person is looking at. We present a system for accurately tracking gaze positions on a three-dimensional object using a monocular head-mounted eye tracker. We accomplish this by 1) using digital manufacturing to create stimuli with accurately known geometry, 2) embedding fiducial markers directly into the manufactured objects to reliably estimate the rigid transformation of the object, and 3) using a perspective model to relate pupil positions to 3D locations. This combination enables the efficient and accurate computation of gaze position on an object from measured pupil positions. We validate the accuracy of our system experimentally, achieving an angular resolution of 0.8° and a 1.5% depth error using a simple calibration procedure with 11 points.}, keywords = {}, pubstate = {published}, tppubtype = {incollection} } Many applications in visualization benefit from accurate knowledge of what a person is looking at. We present a system for accurately tracking gaze positions on a three-dimensional object using a monocular head-mounted eye tracker. We accomplish this by 1) using digital manufacturing to create stimuli with accurately known geometry, 2) embedding fiducial markers directly into the manufactured objects to reliably estimate the rigid transformation of the object, and 3) using a perspective model to relate pupil positions to 3D locations. This combination enables the efficient and accurate computation of gaze position on an object from measured pupil positions. We validate the accuracy of our system experimentally, achieving an angular resolution of 0.8° and a 1.5% depth error using a simple calibration procedure with 11 points. |
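The perspective model relating pupil positions to 3D locations is fit from calibration correspondences; one standard way to do this, which may differ from the authors' exact formulation, is the direct linear transform (DLT). A minimal sketch:

```python
import numpy as np

def fit_projection_dlt(points_3d, pupil_2d):
    """Fit a 3x4 perspective projection matrix from 3D calibration points
    to 2D pupil positions via the direct linear transform. Needs at least
    six correspondences (11 degrees of freedom)."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, pupil_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # Least-squares solution of A p = 0: the right singular vector
    # belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, point_3d):
    """Map a 3D point to a 2D pupil position with the fitted matrix."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]
```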
Richter, Ronald; Alexa, Marc Beam Meshes (Journal Article) Computers & Graphics, 2015. @article{Richter:2015:BM, title = {Beam Meshes}, author = {Ronald Richter and Marc Alexa}, url = {http://cybertron.cg.tu-berlin.de/ronald/files/beams.pdf}, doi = {10.1016/j.cag.2015.08.007}, year = {2015}, date = {2015-10-15}, journal = {Computers & Graphics}, abstract = {We present an approach for representing free-form geometry with a set of beams with rectangular cross-section. This requires the edges of the mesh to be free of torsion. We generate such meshes in a two-step procedure: first we generate a coarse, low-valence mesh approximation using a new variant of anisotropic centroidal Voronoi tessellation. Then we modify the mesh and create beams by incorporating constraints using iterative optimization. For fabrication we provide solutions for designing the joints, generating a cutting plan for CNC machines, and suggesting a building sequence. The approach is demonstrated on several virtual and real examples.}, keywords = {}, pubstate = {published}, tppubtype = {article} } We present an approach for representing free-form geometry with a set of beams with rectangular cross-section. This requires the edges of the mesh to be free of torsion. We generate such meshes in a two-step procedure: first we generate a coarse, low-valence mesh approximation using a new variant of anisotropic centroidal Voronoi tessellation. Then we modify the mesh and create beams by incorporating constraints using iterative optimization. For fabrication we provide solutions for designing the joints, generating a cutting plan for CNC machines, and suggesting a building sequence. The approach is demonstrated on several virtual and real examples. |
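For reference, the classic isotropic centroidal Voronoi tessellation via Lloyd's algorithm on a dense point sampling of a surface; the paper uses a new anisotropic variant, which this baseline sketch does not reproduce:

```python
import numpy as np
from scipy.spatial import cKDTree

def lloyd_cvt(samples, k, iters=50, seed=0):
    """Standard Lloyd iteration: alternate between assigning samples to
    their nearest site (a discrete Voronoi partition) and moving each
    site to the centroid of its cell. `samples` is an (n, 3) float array
    densely covering the surface."""
    rng = np.random.default_rng(seed)
    sites = samples[rng.choice(len(samples), k, replace=False)]
    for _ in range(iters):
        nearest = cKDTree(sites).query(samples)[1]   # cell assignment
        for i in range(k):
            cell = samples[nearest == i]
            if len(cell):
                sites[i] = cell.mean(axis=0)         # centroid update
    return sites
```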
Fender, Andreas; Müller, Jörg; Lindlbauer, David Creature Teacher: A Performance-Based Animation System for Creating Cyclic Movements (Inproceeding) The 3rd ACM Symposium on Spatial User Interaction, SUI'15, pp. 113–122, 2015, ISBN: 978-1-4503-3703-8. @inproceedings{Fender2015, title = {Creature Teacher: A Performance-Based Animation System for Creating Cyclic Movements}, author = {Andreas Fender and Jörg Müller and David Lindlbauer}, url = {http://dl.acm.org/authorize?N07199, Paper}, doi = {10.1145/2788940.2788944}, isbn = {978-1-4503-3703-8}, year = {2015}, date = {2015-08-08}, booktitle = {The 3rd ACM Symposium on Spatial User Interaction, SUI'15}, pages = {113--122}, abstract = {We present Creature Teacher, a performance-based animation system for creating cyclic movements. Users directly manipulate body parts of a virtual character by using their hands. Creature Teacher's generic approach makes it possible to animate rigged 3D models with nearly arbitrary topology (e.g., non-humanoid) without requiring specialized user-to-character mappings or predefined movements. We use a bimanual interaction paradigm, allowing users to select parts of the model with one hand and manipulate them with the other hand. Cyclic movements of body parts during manipulation are detected and repeatedly played back - also while animating other body parts. Our approach of taking cyclic movements as an input makes mode switching between recording and playback obsolete and allows for fast and seamless creation of animations. We show that novice users with no animation background were able to create expressive cyclic animations for initially static virtual 3D creatures.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } We present Creature Teacher, a performance-based animation system for creating cyclic movements. Users directly manipulate body parts of a virtual character by using their hands. Creature Teacher's generic approach makes it possible to animate rigged 3D models with nearly arbitrary topology (e.g., non-humanoid) without requiring specialized user-to-character mappings or predefined movements. We use a bimanual interaction paradigm, allowing users to select parts of the model with one hand and manipulate them with the other hand. Cyclic movements of body parts during manipulation are detected and repeatedly played back - also while animating other body parts. Our approach of taking cyclic movements as an input makes mode switching between recording and playback obsolete and allows for fast and seamless creation of animations. We show that novice users with no animation background were able to create expressive cyclic animations for initially static virtual 3D creatures. |
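Detecting that a manipulated body part is moving cyclically could, for instance, be approximated by autocorrelation of a joint-motion signal. This is a hypothetical stand-in for Creature Teacher's detector, which the abstract does not specify:

```python
import numpy as np

def detect_cycle_period(signal, min_lag=10):
    """Estimate the period of a roughly cyclic 1D motion signal (e.g. one
    joint coordinate over time) as the lag with the highest normalized
    autocorrelation beyond `min_lag` frames. Returns (lag, score); a
    score near 1 indicates a strongly cyclic movement."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..n-1
    ac = ac / ac[0]                                    # normalize: lag 0 -> 1
    lag = min_lag + int(np.argmax(ac[min_lag:]))
    return lag, ac[lag]
```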
Walter, Robert; Bulling, Andreas; Lindlbauer, David; Schüssler, Martin; Müller, Jörg Analyzing Visual Attention During Whole Body Interaction with Public Displays (Inproceeding) The 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UBICOMP'15, pp. 1263–1267, 2015, ISBN: 978-1-4503-3574-4. @inproceedings{Walter2015, title = {Analyzing Visual Attention During Whole Body Interaction with Public Displays}, author = {Robert Walter and Andreas Bulling and David Lindlbauer and Martin Schüssler and Jörg Müller}, url = {http://dl.acm.org/authorize?N07198, Paper}, doi = {10.1145/2750858.2804255}, isbn = {978-1-4503-3574-4}, year = {2015}, date = {2015-08-07}, booktitle = {The 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UBICOMP'15}, pages = {1263--1267}, abstract = {While whole body interaction can enrich user experience on public displays, it remains unclear how common visualizations of user representations impact users' ability to perceive content on the display. In this work we use a head-mounted eye tracker to record visual behavior of 25 users interacting with a public display game that uses a silhouette user representation, mirroring the users' movements. Results from visual attention analysis as well as post-hoc recall and recognition tasks on display contents reveal that visual attention is mostly on users' silhouette while peripheral screen elements remain largely unattended. In our experiment, content attached to the user representation attracted significantly more attention than other screen contents, while content placed at the top and bottom of the screen attracted significantly less. Screen contents attached to the user representation were also significantly better remembered than those at the top and bottom of the screen.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } While whole body interaction can enrich user experience on public displays, it remains unclear how common visualizations of user representations impact users' ability to perceive content on the display. In this work we use a head-mounted eye tracker to record visual behavior of 25 users interacting with a public display game that uses a silhouette user representation, mirroring the users' movements. Results from visual attention analysis as well as post-hoc recall and recognition tasks on display contents reveal that visual attention is mostly on users' silhouette while peripheral screen elements remain largely unattended. In our experiment, content attached to the user representation attracted significantly more attention than other screen contents, while content placed at the top and bottom of the screen attracted significantly less. Screen contents attached to the user representation were also significantly better remembered than those at the top and bottom of the screen. |
Herholz, Philipp; Kyprianidis, Jan Eric; Alexa, Marc Perfect Laplacians for Polygon Meshes (Journal Article) Computer Graphics Forum (SGP 2015), 2015. @article{Herholz:2015:PLP, title = {Perfect Laplacians for Polygon Meshes}, author = {Philipp Herholz and Jan Eric Kyprianidis and Marc Alexa}, editor = {Mirela Ben-Chen and Ligang Liu}, url = {http://cybertron.cg.tu-berlin.de/~philipp/SGP2015/, Project Website}, doi = {10.1111/cgf.12709}, year = {2015}, date = {2015-07-06}, journal = {Computer Graphics Forum (SGP 2015)}, abstract = {A discrete Laplace-Beltrami operator is called perfect if it possesses all the important properties of its smooth counterpart. It is known which triangle meshes admit perfect Laplace operators and how to fix any other mesh by changing the combinatorics. We extend the characterization of meshes that admit perfect Laplacians to general polygon meshes. More importantly, we provide an algorithm that computes a perfect Laplace operator for any polygon mesh without changing the combinatorics, although, possibly changing the embedding. We evaluate this algorithm and demonstrate it in applications.}, keywords = {}, pubstate = {published}, tppubtype = {article} } A discrete Laplace-Beltrami operator is called perfect if it possesses all the important properties of its smooth counterpart. It is known which triangle meshes admit perfect Laplace operators and how to fix any other mesh by changing the combinatorics. We extend the characterization of meshes that admit perfect Laplacians to general polygon meshes. More importantly, we provide an algorithm that computes a perfect Laplace operator for any polygon mesh without changing the combinatorics, although, possibly changing the embedding. We evaluate this algorithm and demonstrate it in applications. |
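For context, the standard cotangent Laplace operator for triangle meshes, i.e., the special case that the paper extends to general polygon meshes. A minimal sketch (assuming non-degenerate triangles), not the paper's construction:

```python
import numpy as np
from scipy.sparse import coo_matrix

def cotan_laplacian(V, F):
    """Cotangent Laplacian of a triangle mesh. V: (n, 3) vertex positions,
    F: (m, 3) vertex indices. Returns the sparse n x n matrix with
    L_ij = -(cot a + cot b)/2 for edge (i, j) and row sums zero."""
    n = len(V)
    I, J, W = [], [], []
    for tri in F:
        for k in range(3):
            i, j, l = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            # Cotangent of the angle at vertex l, opposite edge (i, j).
            u, w = V[i] - V[l], V[j] - V[l]
            cot = np.dot(u, w) / np.linalg.norm(np.cross(u, w))
            I += [i, j, i, j]
            J += [j, i, i, j]
            W += [-0.5 * cot, -0.5 * cot, 0.5 * cot, 0.5 * cot]
    # coo_matrix sums duplicate (row, col) entries on conversion.
    return coo_matrix((W, (I, J)), shape=(n, n)).tocsr()
```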