The next meeting day of the Rendering Working Group (Groupe de Travail Rendu) of the GDR IG RV will take place in Paris on June 10, 2015, at Telecom ParisTech (amphi Estaunié). You are, as always, very welcome to attend.
For information, on the previous evening there is the "Les français sélectionnés à Siggraph" session (French authors selected for SIGGRAPH).
Program for the day
10h : Gurprit Singh - Variance Analysis for Monte Carlo Integration
10h30 : Jonathan Dupuy - Extracting Microfacet-based BRDF Parameters from Arbitrary Materials with Power Iterations
11h00 : Julien Gerhards - Partitioned Shadow Volumes
11h30 : Coffee break (15 min)
11h45 : Invited Talk - Ken Perlin, NYU Media Research Lab - Prototyping the Future
12h45 - 14h : Lunch in one of the many nearby restaurants
14h : Bruno Stefanizzi, AMD - Rendering on heterogeneous architectures (CPU+GPU)
14h30 : Carlos Zubiaga - MatCap Decomposition for Dynamic Appearance Manipulation
15h : Basile Sauvage - Simplification of Meshes with Digitized Radiance
15h30 : Coffee break (15 min)
15h45 : Mahmoud Omidvar - Radiance cache optimization for global illumination
16h15 : Kenneth Vanhoey - Unifying Color and Texture Transfer for Predictive Appearance Manipulation
16h45 : Adrien Gruson - Optimizing error distribution for progressive rendering with Metropolis
17h15 : Round table / discussion
Abstracts of the presentations
Prototyping the Future
Ken Perlin
The question our lab at NYU is asking is: "How might people in the future communicate with each other in everyday life, as computation and display technologies continue to develop, to the point where computer-mediated interfaces are so ubiquitous and intuitive as to be imperceptible?" To address this, we are combining features of Augmented and Virtual Reality. Participants walk freely around in physical space, interacting with other people and physical objects, just as they do in everyday life. Yet everything those participants see and hear is computer-mediated, thereby allowing them to share any reality they wish. A combination of wireless VR, motion capture and 3D audio synthesis simulates the experience of future high-resolution contact lens and spatial audio displays.
***
Variance Analysis for Monte Carlo Integration
Gurprit Singh
We propose a new spectral analysis of the variance in Monte Carlo integration, expressed in terms of the power spectra of the sampling pattern and the integrand involved. We build our framework in the Euclidean space using Fourier tools and on the sphere using spherical harmonics. We further provide a theoretical background that explains how our spherical framework can be extended to the hemi-spherical domain. We use our framework to estimate the variance convergence rate of different state-of-the-art sampling patterns in both the Euclidean and spherical domains, as the number of samples increases. Furthermore, we formulate design principles for constructing sampling methods that can be tailored according to available resources. We validate our theoretical framework by performing numerical integration over several integrands sampled using different sampling patterns.
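As a small companion to the numerical validation mentioned above, here is a minimal sketch (not the paper's spectral framework) that empirically compares the variance convergence of plain random sampling and jittered (stratified) sampling on a toy 1D integrand; the integrand, sample counts and trial count are arbitrary choices for illustration only.

# Illustration only: empirically compare the variance convergence of plain
# random sampling vs. jittered (stratified) sampling on a smooth 1D integrand.
import numpy as np

def integrand(x):
    return np.sin(2.0 * np.pi * x) ** 2            # smooth test function on [0, 1]

def mc_estimate(samples):
    return integrand(samples).mean()               # basic Monte Carlo estimator

def variance(sampler, n, trials=500):
    estimates = [mc_estimate(sampler(n)) for _ in range(trials)]
    return np.var(estimates)

def random_sampler(n):
    return np.random.rand(n)                       # uncorrelated uniform samples

def jittered_sampler(n):
    return (np.arange(n) + np.random.rand(n)) / n  # one sample per stratum

for n in [16, 64, 256, 1024]:
    print(n, variance(random_sampler, n), variance(jittered_sampler, n))
# Plain random sampling decays roughly as O(1/N); jittered sampling decays
# faster for smooth integrands, which is the kind of behaviour the spectral
# analysis predicts from the sampling pattern's power spectrum.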
***
Extracting Microfacet-based BRDF Parameters from Arbitrary Materials with Power Iterations
Jonathan Dupuy
We introduce a novel fitting procedure that takes as input an arbitrary material, possibly anisotropic, and automatically converts it to a microfacet BRDF. Our algorithm is based on the property that the distribution of microfacets may be retrieved by solving an eigenvector problem that is built solely from backscattering samples. We show that the eigenvector associated with the largest eigenvalue is always the only solution to this problem, and compute it using the power iteration method.
This approach is straightforward to implement, much faster to compute, and considerably more robust than solutions based on nonlinear optimizations. In addition, we provide simple conversion procedures of our fits into both Beckmann and GGX roughness parameters, and discuss the advantages of microfacet slope space to make our fits editable. We apply our method to measured materials from two large databases that include anisotropic materials, and demonstrate the benefits of spatially varying roughness on texture mapped geometric models.
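The numerical core named in the abstract is the power iteration method for the dominant eigenvector. Below is a generic, minimal implementation; the matrix A is a placeholder random symmetric matrix, whereas in the paper it would be assembled from backscattering samples of the material.

# Generic power iteration for the dominant eigenvector. The matrix A here is a
# stand-in; the method of the talk builds its matrix from retro-reflective
# (backscattering) measurements of the material being fitted.
import numpy as np

def power_iteration(A, iterations=100, tol=1e-9):
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])   # arbitrary starting vector
    for _ in range(iterations):
        w = A @ v
        w = w / np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:             # converged
            return w
        v = w
    return v

# Toy usage with a random symmetric positive matrix standing in for the
# measurement-driven matrix of the method.
rng = np.random.default_rng(0)
M = rng.random((64, 64))
A = M @ M.T
dominant = power_iteration(A)
print(dominant[:5])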
***
Partitioned Shadow Volumes
Julien Gerhards
Real-time shadows remain a challenging problem in computer graphics. In this context, shadow algorithms generally rely either on shadow mapping or shadow volumes. This paper rediscovers an old class of algorithms that build a binary space partition over the shadow volumes. For almost 20 years, such methods have received little attention as they have been considered to lack both robustness and efficiency. We show that these issues can be overcome, leading to a simple and robust shadow algorithm. Hence we demonstrate that this kind of approach can reach a high level of performance. Our algorithm uses a new partitioning strategy which avoids any polygon clipping. It relies on a Ternary Object Partitioning tree, a new data structure used to find whether an image point is shadowed. Our method works on a triangle soup and its memory footprint is fixed. Our experiments show that it is efficient and robust, including for finely tessellated models.
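Purely as a schematic reading of the abstract (the paper's actual TOP-tree layout and traversal rules are not reproduced here), the sketch below shows what a ternary partitioning tree with a point-in-shadow query could look like; the node fields and the third "crossing" branch are assumptions made for illustration only.

# Schematic sketch of a ternary partitioning tree queried per image point.
# NOT the paper's construction: node contents and descent rules are guesses.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class TOPNode:
    plane: np.ndarray                        # (a, b, c, d) with a*x + b*y + c*z + d = 0
    negative: Optional["TOPNode"] = None     # subtree for points behind the plane
    positive: Optional["TOPNode"] = None     # subtree for points in front of the plane
    crossing: Optional["TOPNode"] = None     # third branch, hence "ternary"
    shadowed: bool = False                   # meaningful at leaves only

def in_shadow(node, p, eps=1e-6):
    """Walk the tree and report whether point p is classified as shadowed."""
    while node.negative or node.positive or node.crossing:
        side = float(np.dot(node.plane[:3], p) + node.plane[3])
        if side < -eps and node.negative:
            node = node.negative
        elif side > eps and node.positive:
            node = node.positive
        elif node.crossing:
            node = node.crossing
        else:
            break
    return node.shadowed

# Tiny hand-built example: points above z = 1 are flagged as shadowed.
lit = TOPNode(np.array([0.0, 0.0, 1.0, 0.0]), shadowed=False)
dark = TOPNode(np.array([0.0, 0.0, 1.0, 0.0]), shadowed=True)
root = TOPNode(np.array([0.0, 0.0, 1.0, -1.0]), negative=lit, positive=dark)
print(in_shadow(root, np.array([0.0, 0.0, 2.0])))   # True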
***
Rendering on heterogeneous architectures (CPU+GPU)
Bruno Stefanizzi
We will present the latest rendering technologies and APIs for architectures that unify the CPU and the GPU, for better performance and energy efficiency of the hardware.
***
Simplification of Meshes with Digitized Radiance
Basile Sauvage
View-dependent surface color of virtual objects can be represented by outgoing radiance of the surface. In this paper we tackle the processing of outgoing radiance stored as a vertex attribute of triangle meshes. Data resulting from an acquisition process can be very large and computationally intensive to render. We show that when reducing the global memory footprint of such acquired objects, smartly reducing the spatial resolution is an effective strategy for overall appearance preservation. Whereas state-of-the-art simplification processes only consider scalar or vectorial attributes, we conversely consider radiance functions defined on the surface for which we derive a metric. For this purpose, several tools are introduced like coherent radiance function interpolation, gradient computation, and distance measurements. Both synthetic and acquired examples illustrate the benefit and the relevance of this radiance-aware simplification process.
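The abstract does not give the metric itself; as a hedged illustration of the kind of quantities involved, the sketch below assumes per-vertex radiance stored as coefficients in an orthonormal spherical-harmonics basis, in which case the L2 distance between two radiance functions is simply the Euclidean distance between their coefficient vectors. The interpolation and distance helpers are hypothetical, not the paper's operators.

# Illustration, not the paper's metric: per-vertex radiance as coefficients in
# an orthonormal basis (e.g. spherical harmonics), with the two kinds of tools
# a radiance-aware simplification needs: interpolation and distance.
import numpy as np

def radiance_l2_distance(coeffs_a, coeffs_b):
    """L2 distance between two radiance functions given as SH coefficient vectors."""
    return np.linalg.norm(np.asarray(coeffs_a) - np.asarray(coeffs_b))

def interpolated_radiance(coeffs_a, coeffs_b, t):
    """Linear interpolation of radiance along an edge, done in coefficient space."""
    return (1.0 - t) * np.asarray(coeffs_a) + t * np.asarray(coeffs_b)

# Hypothetical per-vertex data: 9 coefficients (order-2 SH expansion).
v0 = np.random.rand(9)
v1 = np.random.rand(9)
midpoint = interpolated_radiance(v0, v1, 0.5)
print(radiance_l2_distance(v0, midpoint))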
***
Radiance cache optimization for global illumination
Mahmoud Omidvar
Lighting simulation is a process that turns out to be more complex (computation time, memory cost, implementation effort) for glossy materials than for Lambertian or specular materials. To avoid the costly evaluation of certain terms of the radiance equation (the convolution between the material's reflection function and the radiance distribution of the environment), we propose a new data structure called the Equivalent Surface Source (Source Surfacique Équivalente, SSE). Using this data structure requires precomputing, and then modelling, the behaviour of materials under various types of light sources (positions, extents). Genetic algorithms allow us to determine the parameters of the BRDF models, which introduces a first source of approximation. The lighting simulation approach we use is based on a radiance cache, which stores the incident illumination in the form of SSEs at points called records. During the lighting simulation, the light environment must also be approximated, at each record, by a set of equivalent surface sources defined dynamically. The Equivalent Surface Source approach is particularly interesting for rough materials, or for very glossy materials placed in relatively uniform environments. Using SSEs considerably reduces both the memory cost and the computation time.
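The abstract mentions using genetic algorithms to determine BRDF model parameters. The sketch below is a generic, minimal genetic-algorithm loop fitting the parameters of a placeholder Phong-like lobe to synthetic measurements; it only illustrates the idea of evolutionary parameter fitting, not the authors' SSE pipeline or BRDF models.

# Minimal, generic genetic-algorithm fit of toy reflection-model parameters.
# The "model" and "measured" data are placeholders, not the SSE pipeline.
import numpy as np

rng = np.random.default_rng(1)

def model(params, angles):
    k_d, k_s, shininess = params                       # toy Phong-like lobe
    return k_d + k_s * np.cos(angles) ** shininess

angles = np.linspace(0.0, np.pi / 2, 32)
measured = model((0.3, 0.6, 40.0), angles)             # synthetic "measurements"

def fitness(params):
    return -np.mean((model(params, angles) - measured) ** 2)

population = rng.uniform([0.0, 0.0, 1.0], [1.0, 1.0, 100.0], size=(64, 3))
for generation in range(200):
    scores = np.array([fitness(p) for p in population])
    parents = population[np.argsort(scores)[-16:]]      # keep the 16 best
    children = parents[rng.integers(0, 16, 48)] + rng.normal(0.0, [0.05, 0.05, 2.0], (48, 3))
    children[:, 2] = np.clip(children[:, 2], 1.0, 100.0)  # keep shininess in range
    population = np.vstack([parents, children])
best = population[np.argmax([fitness(p) for p in population])]
print(best)   # should approach (0.3, 0.6, 40.0)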
***
Unifying Color and Texture Transfer for Predictive Appearance Manipulation
Kenneth Vanhoey
Recent color transfer methods use local information to learn the transformation from a source to an exemplar image, and then transfer this appearance change to a target image. These solutions achieve very successful results for general mood changes, e.g., changing the appearance of an image from "sunny" to "overcast". However, such methods have a hard time creating new image content, such as leaves on a bare tree. Texture transfer, on the other hand, can synthesize such content but tends to destroy image structure. We propose the first algorithm that unifies color and texture transfer, outperforming both by leveraging their respective strengths. A key novelty in our approach resides in teasing apart appearance changes that can be modeled simply as changes in color versus those that require new image content to be generated. Our method starts with an analysis phase which evaluates the success of color transfer by comparing the exemplar with the source. This analysis then drives a selective, iterative texture transfer algorithm that simultaneously predicts the success of color transfer on the target and synthesizes new content where needed. We demonstrate our unified algorithm by transferring large temporal changes between photographs, such as change of season (e.g., leaves on bare trees or piles of snow on a street) and flooding.
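As background for the analysis phase described above, here is a minimal sketch of the global color-transfer building block (per-channel mean/variance matching in the spirit of Reinhard et al.) plus a per-pixel residual; this is not the paper's local method, only an illustration of the kind of signal an analysis phase could compare to decide where texture synthesis is needed.

# Basic global color transfer and a per-pixel residual map, as an illustration
# of the color-transfer building block; the paper's contribution is deciding
# locally where this suffices and where new texture content must be generated.
import numpy as np

def global_color_transfer(source, exemplar):
    """Match per-channel mean and standard deviation of source to exemplar.

    source, exemplar: float arrays of shape (H, W, 3) in [0, 1].
    """
    src_mean, src_std = source.mean(axis=(0, 1)), source.std(axis=(0, 1))
    ex_mean, ex_std = exemplar.mean(axis=(0, 1)), exemplar.std(axis=(0, 1))
    out = (source - src_mean) / (src_std + 1e-8) * ex_std + ex_mean
    return np.clip(out, 0.0, 1.0)

def color_transfer_error(transferred, exemplar):
    """Per-pixel residual; large values flag regions where color transfer alone
    fails and new content (leaves, snow, ...) would have to be synthesized."""
    return np.linalg.norm(transferred - exemplar, axis=-1)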
***
Optimizing error distribution for progressive rendering with Metropolis
Adrien Gruson
Distributing error over the image plane is an important element of noise reduction for images generated with realistic rendering algorithms. Metropolis-Hastings-based rendering algorithms are well suited to complex scenes (visibility, materials, ...). However, in these methods it is difficult to control the distribution of samples over the image plane. Some previous work has proposed solutions to this problem in the context of path tracing. We will present a new importance function whose goal is to distribute the error over the image plane in the case of progressive photon mapping.
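The abstract does not detail the new importance function; as background, here is a minimal sketch of the generic Metropolis-Hastings mutation loop over the image plane that such an importance function would plug into. The toy_error target below is a hypothetical stand-in for a real image-space error estimate.

# Generic Metropolis-Hastings walk over the image plane driven by a
# user-supplied importance (target) function; the talk's contribution, the
# design of that importance function for progressive photon mapping, is not
# reproduced here.
import numpy as np

rng = np.random.default_rng(2)

def metropolis_pixels(importance, width, height, n_samples, sigma=5.0):
    """Yield pixel positions distributed proportionally to `importance`."""
    x = np.array([width * 0.5, height * 0.5])            # arbitrary start
    fx = importance(x)
    for _ in range(n_samples):
        y = x + rng.normal(0.0, sigma, 2)                # symmetric proposal
        inside = 0.0 <= y[0] < width and 0.0 <= y[1] < height
        fy = importance(y) if inside else 0.0
        if rng.random() < min(1.0, fy / max(fx, 1e-12)):
            x, fx = y, fy                                # accept the mutation
        yield x.copy()

# Hypothetical importance: pretend the error is concentrated near the centre.
def toy_error(p):
    return np.exp(-np.sum((p - 64.0) ** 2) / (2.0 * 20.0 ** 2))

samples = list(metropolis_pixels(toy_error, 128, 128, 1000))
print(len(samples), samples[-1])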
***