Visionday 2013 Programme
Programme
  Industrial image analysis
  Technologies for life science
  Computer graphics
  Medical image analysis
ABSTRACTS
Plenary talk
 
  1. OverCoat: A Journey In Between Primary and Secondary Space
    Robert Sumner / Disney Research
    Primary space refers to the 3D world around us in which objects live, while secondary space denotes the 2D canvas on which depictions of those objects are made. Over the years, much of computer graphics has focused on photorealism, so that secondary space renditions match their primary space counterparts as closely as possible. Sometimes, however, the artistic vision for an animation project requires the opposite approach. OverCoat is a prototype digital painting system developed at Disney Research Zurich that blurs the distinction between these two concepts by generalizing the traditional 2D painting metaphor to the third dimension. Painted strokes are upgraded to the primary space using an implicit canvas concept and a stroke embedding procedure. Traditional 3D objects, on the other hand, are downgraded to play a subservient role in the embedding procedure. A hybrid rendering model provides the bridge back to the secondary space of the image. Taken together, this technology enables artists to create a new class of expressive 3D imagery while maintaining a painterly aesthetic typically restricted to 2D artwork.

    In this talk, I will give a behind-the-scenes look at the design and development of OverCoat. After first showing how OverCoat builds upon other inspiring research in stroke-based rendering, I'll present the motivating principles that guided the design of the system. Then, I'll discuss the inherent challenges in bringing this 3D painting concept to life, and highlight the key technical advances we made in order to overcome them. Finally, I'll show some of the most recent advances we've made in our prototype system.

    Bio:

    Dr. Robert Sumner is the Associate Director of Disney Research Zurich and leads the lab's research on animation and interactive graphics. His research group strives to bypass technical barriers in the animation production pipeline with new algorithms that expand the designer's creative toolbox in terms of depiction, movement, deformation, stylization, control, and efficiency. Robert received a B.S. (1998) degree in computer science from the Georgia Institute of Technology and his M.S. (2001) and Ph.D. (2005) from the Massachusetts Institute of Technology. He spent three years as a postdoctoral researcher at ETH Zurich before joining Disney. Robert is an adjunct lecturer at ETH Zurich and teaches a course called the Game Programming Laboratory in which students work in small teams to design and implement novel video games.
Technologies for life science
 
  1. Behaviour analysis using optical flow
    Ruta Gronskyte / DTU Compute
    We propose a new approach for monitoring animal movement in thermal videos. The method distinguishes walking in the expected direction from walking in the opposite direction, stopping, and lying down. It is based on optical flow (OF), blob detection and multivariate principal component analysis (MPCA). The optical flow is calculated, filtered and quantified, and MPCA is then performed; the results are presented as a quality control chart of principal components. The proposed method gives fast and easy interpretation of the video recordings and works on-line after pre-training.
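    The pipeline lends itself to a compact illustration. The sketch below is not the authors' implementation; it is a minimal stand-in that assumes Farneback optical flow as the OF step, magnitude-weighted direction histograms as the quantification step, and a plain PCA score chart with 3-sigma limits learned from training footage (the file names are invented):

      import numpy as np
      import cv2  # OpenCV

      def direction_histogram(flow, bins=8):
          # Quantify one frame of optical flow: histogram of flow directions,
          # weighted by flow magnitude, so dominant movement directions stand out.
          fx, fy = flow[..., 0].ravel(), flow[..., 1].ravel()
          mag = np.hypot(fx, fy)
          ang = np.arctan2(fy, fx)
          hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
          return hist / (mag.sum() + 1e-9)

      def flow_features(video_path):
          cap = cv2.VideoCapture(video_path)
          ok, prev = cap.read()
          prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
          feats = []
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                  0.5, 3, 15, 3, 5, 1.2, 0)
              feats.append(direction_histogram(flow))
              prev = gray
          cap.release()
          return np.array(feats)  # one row of direction features per frame

      # Pre-train PCA on footage showing normal movement (hypothetical file).
      train = flow_features("thermal_normal.mp4")
      mean = train.mean(axis=0)
      _, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
      pc1 = Vt[0]                               # first principal direction
      scores = (train - mean) @ pc1
      lo, hi = scores.mean() - 3 * scores.std(), scores.mean() + 3 * scores.std()

      # On-line monitoring: frames whose PC1 score leaves the control band
      # suggest stopping, lying down or movement against the expected direction.
      test = flow_features("thermal_live.mp4")  # hypothetical file
      for t, score in enumerate((test - mean) @ pc1):
          if not (lo <= score <= hi):
              print(f"frame {t}: out-of-control score {score:.3f}")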
  2. Big data - from twitter to smartphones
    Sune Lehmann Jørgensen / DTU Compute
    A personal perspective on Big Data. I start by explaining how we used over 300 million tweets to map the collective mood of the United States. The mood of each tweet was inferred using a simple word list, and the results are represented as density-preserving cartograms. A cartogram is a map in which the mapping variable (in this case, the number of tweets) is substituted for the true land area. In the final part of the talk, I discuss why "good data" can be more interesting than big data, and how that has altered the direction of my research: I have moved from data analysis to generating my own high-resolution data set, taking advantage of recent technological developments to push the boundaries of a quantitative understanding of social systems. Specifically, my team uses modern smartphones to record the network of social interactions at very high resolution on a variety of communication channels, e.g. face-to-face interaction via Bluetooth, geolocation via GPS, social network data (Facebook, Twitter) via apps, and telecommunication data via call logs.
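    To make the word-list approach concrete, here is a minimal hypothetical sketch; the tiny valence lexicon and example tweets are invented (real studies use large validated word lists such as ANEW), and the cartogram itself is a separate visualization step:

      import re

      # Hypothetical valence scores on a 1 (sad) .. 9 (happy) scale.
      LEXICON = {"happy": 8.2, "love": 8.7, "great": 7.8,
                 "sad": 2.1, "hate": 2.2, "awful": 2.0}

      def tweet_mood(text):
          # Average valence of the lexicon words appearing in the tweet;
          # tweets containing no lexicon words carry no mood signal (None).
          words = re.findall(r"[a-z']+", text.lower())
          hits = [LEXICON[w] for w in words if w in LEXICON]
          return sum(hits) / len(hits) if hits else None

      tweets_by_state = {"CA": ["I love the beach, great day!"],
                         "NY": ["awful commute again, I hate mondays"]}
      for state, tweets in tweets_by_state.items():
          moods = [m for m in (tweet_mood(t) for t in tweets) if m is not None]
          if moods:
              print(state, sum(moods) / len(moods))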
  3. Computer-intensive methods for the analysis of high-throughput genetic data
    Gilles Guillot / DTU Compute
    What is the geographic origin of an illegally traded plant or animal seized by airport customs? This question can be answered by comparing the DNA sequence harboured by this individual to that of other individuals of known geographic origin. A quantitative and objective comparison requires a statistical model that describes genetic variation in space. We present here a method based on a Gaussian random field model, and we show how it can be implemented with the inference machinery known as INLA-SPDE, which provides fast Bayesian estimates without resorting to MCMC. The accuracy of the inferences is illustrated with data simulated from various statistical models and also from a more biologically grounded model that accounts explicitly for demography, mutation and migration. The proposed method is shown to be highly accurate for modern datasets consisting of a large number (e.g. more than 1000) of SNP or AFLP markers. This talk is based on an on-going collaboration with H. Jonsson and L. Orlando at the Centre for Geogenetics, KU.
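    The assignment idea can be illustrated without the full GRF/INLA-SPDE machinery. The sketch below substitutes simple Gaussian-kernel smoothing of reference allele frequencies for the Gaussian random field model, and assigns a query individual (haploid 0/1 genotypes, for simplicity) to the grid location maximizing its log-likelihood; all names and numbers are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)

      # Reference panel: coordinates and 0/1 genotypes at n_loci SNPs.
      n_ref, n_loci = 200, 50
      ref_xy = rng.uniform(0, 10, size=(n_ref, 2))
      # Toy truth: allele frequency rises smoothly from west to east,
      # so only the east-west axis of the origin is identifiable here.
      true_freq = 1 / (1 + np.exp(-(ref_xy[:, 0:1] - 5)))
      ref_geno = (rng.random((n_ref, n_loci)) < true_freq).astype(float)

      def smoothed_freq(xy, ref_xy, ref_geno, bandwidth=1.5, eps=1e-3):
          # Kernel-smoothed allele frequency at location xy -- a crude
          # stand-in for the Gaussian random field estimated via INLA-SPDE.
          d2 = ((ref_xy - xy) ** 2).sum(axis=1)
          w = np.exp(-0.5 * d2 / bandwidth**2)
          p = (w @ ref_geno) / w.sum()
          return np.clip(p, eps, 1 - eps)

      def assign(query, ref_xy, ref_geno, grid_step=0.5):
          # Scan candidate origins on a grid, score the query genotype by
          # its Bernoulli log-likelihood under the local frequencies.
          best, best_ll = None, -np.inf
          for x in np.arange(0, 10, grid_step):
              for y in np.arange(0, 10, grid_step):
                  p = smoothed_freq(np.array([x, y]), ref_xy, ref_geno)
                  ll = (query * np.log(p) + (1 - query) * np.log(1 - p)).sum()
                  if ll > best_ll:
                      best, best_ll = (x, y), ll
          return best

      # Query individual simulated from an easterly location (x around 8).
      query = (rng.random(n_loci) < 1 / (1 + np.exp(-(8 - 5)))).astype(float)
      print("estimated origin:", assign(query, ref_xy, ref_geno))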
  4. From Linear to Crowd Innovation - Facilitating the process by compiling and processing hard and soft big data through social media
    Jens Rønnow Lønholdt*, and Borivoje Boskovic**
    This paper addresses the issue of Big Data in relation to knowledge transfer and the new concept of Crowd Innovation. The internet has made a huge amount of data available, and this amount is increasing exponentially. Different systems are already available for handling the colossal amount of data. However, within the field of research-driven innovation we do not yet see the full benefits of this. Consequently, it is the aim of this paper to present a strategic perspective on the use of Big Data when planning and implementing a research-driven innovation process. The paper will discuss what is considered Small, Big, Hard and Soft Data, respectively. Furthermore, it will present cases in which big data is put to intelligent and beneficial use. The paper will outline, from a strategic viewpoint, where and how the full innovation process from idea to market can benefit from an intelligent use of a combination of hard and soft big data. Finally, the paper will discuss and outline how research-based innovation could move from push to pull and finally reach the concept of crowd-based innovation, understood as the comprehensive and structured use of social media as important platforms for interaction, exchange of ideas and development of products. There are already cases exemplifying this new approach, which we could call Version 3.0 of research-driven innovation.
    *Corresponding Author, LYCEUM Innovation and Process Consultancy, lonholdt@lyceumconsult.dk **M2-Fundraising
  5. Recent advances in (open source) sensometrics analysis tools
    Per Bruun Brockhoff / DTU Compute
    Human perceptual measurements are used as a tool for quality control as well as for product development in both food and non-food industries. In this talk some of the sensometrics methodological development carried out in our group will be presented by presenting an overview of the open source software tools that we recently have made available. This will include the R-packages sensR, ordinal and lmerTest and the stand alone tools PanelCheck (www.panelcheck.com) and ConsumerCheck (available from 2014). PanelCheck, ConsumerCheck and the R-package sensR are specifically designed to deal with sensory and consumer data, whereas the R-packages ordinal and lmerTest are for generel purpose statistical use. One of the benefits of our tools is the ability of combining "statistical" and "Thurstonian" modelling for human perception data enhancing interpretability of analysis results. Focus of this talk will then be on PanelCheck, ConsumerCheck and the R-package sensR, and the R-packages ordinal and lmerTest will be presented in the two following talks.
  6. Social network analysis by non-parametric bayesian relational models
    Morten Mørup / DTU Compute
    Mining for the structure in social networks has become an important problem in order to comprehend and account for the dynamics in these systems. This talk will focus on a relatively new class of modeling tools for social network analysis based on non-parametric Bayesian models of complex networks. This modeling framework is able to 1) adapt to the complexity of the relational data, 2) device generative models for networks, 3) be used to predict missing entries, 4) form interpretable representations of network structure. The models considered in this talk are all scalable and can therefore be used for the analysis of large scale social networks.
Industrial image analysis
 
  1. Applied image analysis for poultry quality control
    Eigil Mølvig Jensen / IHFOOD A/S
    Chicken slaughterhouses process 12,000 birds per hour or more, and manual inspection can no longer keep up. This process can now be automated with digital industrial cameras, computers and image analysis.
    This talk describes the image analysis methods used for shape extraction of the bird and its parts, and for the subsequent detection of a variety of quality parameters. How is the large number of analyses executed within the requirements of a real-time industrial system? A statistical approach is used in order to apply the methods worldwide and to handle the large biological variation between birds. Data is added continuously to a database that is used to improve the underlying mathematical models and to calculate the current performance.
    More info:
    http://www.ihfood.dk/poultry
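    A stripped-down version of the blob-and-statistics idea (illustrative only; file names, thresholds and the reference statistics are invented, and the production system is far richer; assumes OpenCV 4):

      import numpy as np
      import cv2  # OpenCV 4

      def shape_features(image_path):
          # Segment the bird silhouette and summarize it with a few shape numbers.
          gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
          _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          bird = max(contours, key=cv2.contourArea)     # largest blob = the bird
          area = cv2.contourArea(bird)
          perim = cv2.arcLength(bird, True)
          x, y, w, h = cv2.boundingRect(bird)
          return np.array([area,
                           4 * np.pi * area / perim**2,  # circularity
                           w / h])                       # aspect ratio

      # "Statistical approach": reference mean/covariance estimated from many
      # normal birds; inspection flags birds far away in Mahalanobis distance.
      ref = np.array([shape_features(f) for f in
                      ["ok1.png", "ok2.png", "ok3.png", "ok4.png", "ok5.png"]])
      mu, cov = ref.mean(axis=0), np.cov(ref, rowvar=False)
      cov_inv = np.linalg.pinv(cov)

      def is_suspect(image_path, limit=3.0):
          d = shape_features(image_path) - mu
          return float(np.sqrt(d @ cov_inv @ d)) > limit

      print(is_suspect("incoming_bird.png"))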
  2. Toward the additive manufacturing revolution: The ecology of second generation 3D printers
    David Bue Pedersen / DTU Mechanical Engineering
    In his presentation, David Bue Pedersen will address some of the obstacles that must be overcome for additive manufacturing (3D printing) processes to become the competitive production technology of the future. The geometrical freedom offered by additive manufacturing technologies, and the ability to implement production schemes with a high level of mass customization, raise the bar for production tolerance verification and for product design for mass customization. This challenge is far from trivial, yet a generic approach to overcoming it is presented.
    How the additive manufacturing platform of the future will manifest itself is the subject of intensive research worldwide. One competitive parameter that outweighs the rest is the ability of an additive manufacturing platform to handle a variety of processing materials within the same job. Such technologies are only just emerging, yet they promise to become generic manufacturing platforms able to produce extremely complex electro-mechanical and electro-chemical systems autonomously from CAD data. David will address how these emerging technologies can be regarded as universal assemblers, thanks to a technology convergence between the semiconductor industry and the additive manufacturing industry. This convergence may change the way products are manufactured and traded, and shows how science fiction is about to become science fact.
Computer graphics
 
  1. Architects and software: How the software design is changing the way architects are thinking
    Morten Norman Lund / 3XN
    Architects and the building industry have seen a dramatic rise in the use of CAD tools, and scripting and simulation are today common terms when architects talk about the tools used in the design process. The tools used today are more open and less constrained than ever before, allowing computational simulations and optimizations to be carried out within the native programs used by architects. Nobody knows for sure how these technological advances will affect future architecture. However, in this talk we will explore some of the recent developments and how they might affect actual buildings. This analysis will lead us to some educated guesses about how today's trends will influence the houses of tomorrow.
  2. Monte Carlo Rendering and Intel's Embree
    Petrik Clarberg / Advanced Rendering Technology Team, Intel Corporation
    Computer rendering is important in many applications, for example, visualization of products and architectural designs, special effects, virtual reality, and video games. The creation of realistic images requires accurate simulation of how light interacts with different shapes and materials. For this purpose, Monte Carlo methods are commonly used, where millions or even billions of random light rays are simulated to create an image. The rays are typically incoherent, i.e., they follow vastly different paths through the virtual world. Intel's Embree ray tracing kernels provide highly optimized open source code for tracing such incoherent rays, enabling the simulation of many millions of rays per second on a single processor. This talk will give an introduction to Monte Carlo rendering and discuss the design of Intel Embree.
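    The core Monte Carlo idea fits in a few lines. The sketch below (plain Python, unrelated to Embree's actual kernels) estimates how much of the cosine-weighted sky light reaches a ground point beneath a hovering sphere, by shooting random rays; the analytic answer 1 - (R/h)^2 lets one watch the estimator converge at the typical 1/sqrt(N) Monte Carlo rate:

      import numpy as np

      rng = np.random.default_rng(42)

      def cosine_sample(n):
          # Cosine-weighted directions on the hemisphere around +z: with this
          # sampling density the estimator is simply the mean visibility of
          # the rays, because the cosine term cancels against the pdf.
          u1, u2 = rng.random(n), rng.random(n)
          r, phi = np.sqrt(u1), 2 * np.pi * u2
          return np.stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1 - u1)], axis=1)

      def hits_sphere(origins, dirs, center, radius):
          # Ray/sphere intersection test, vectorized over rays.
          oc = origins - center
          b = np.einsum("ij,ij->i", dirs, oc)
          c = np.einsum("ij,ij->i", oc, oc) - radius**2
          disc = b * b - c
          t = -b - np.sqrt(np.where(disc > 0, disc, 0))
          return (disc > 0) & (t > 1e-6)

      center, radius = np.array([0.0, 0.0, 1.0]), 0.5   # sphere above the point
      exact = 1 - (radius / center[2]) ** 2             # unoccluded fraction
      for n in [100, 10_000, 1_000_000]:
          d = cosine_sample(n)
          blocked = hits_sphere(np.zeros((n, 3)), d, center, radius)
          print(f"N={n:>9}: estimate {1 - blocked.mean():.4f}   exact {exact:.4f}")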
  3. Multiphase fluid simulation
    Marek Krzysztof Misztall / Niels Bohr Institute, University of Copenhagen
    In this talk I am going to describe an optimization-based finite element method for fluid simulation on unstructured meshes. This is the first such method to operate on a kinetic computational mesh, which yields several benefits, in particular: accurate treatment of surface-tension-related phenomena, plausible interaction with arbitrary rigid boundaries, and the capacity to simulate several interacting fluid phases. I am also going to demonstrate multiple examples of fluid animations generated using this simulation method.
  4. Plausible real-time rendering for ship simulation
    Artem Kuznetsov / FORCE Technology Russia
    Visualization of the world - from a simple sea scene to a modern port with high traffic - becomes more complex when it comes to a ship's full bridge simulators. The educational purpose of a simulator shifts importance from visual beauty to physical correctness and scene customizability in order to ensure that the instructor can reproduce any place and situation required for training. This imposes a set of requirements which are unusual for a visual game engine. This presentation will describe challenges and tasks pertaining to visualization in a ship simulator, such as multichannel rendering, large field of view, use of geoid model of the Earth, fully controlled position, time and weather conditions. The solutions described form the basis of the visual subsystem of theSimFlex Simulator. Short videos recorded at FORCE Technology can be found at http://www.youtube.com/user/forcetechnology?feature=watch
  5. The computer as co-designer: Potentials and pitfalls in digitalized architecture
    Søren Nielsen / Vandkunsten
    The digital media introduced to the building industry during the last decades have imposed unexpected forces of transformation, leaving the industrial ecology in a stage of permanent disturbance. New specialised niches are emerging faster than digital processes replace traditional professional fields. Temporary balances are established between new predators and preys, between new parasites and hosts. However, many of the most productive and promising potentials of the new media are yet to be employed or even discovered, while technological children's deceases are still flourishing. Most paradoxically, the productivity in the entire building industry has been steadily falling since the breakthrough of digitalisation, which inevitably raises the question if more IKT-standardisation is the right answer to this challenge or if is it rather putting out fire with gasoline?
  6. Toward the additive manufacturing revolution: The ecology of second generation 3D printers
    David Bue Pedersen / DTU Mechanical Engineering
    See the abstract for this talk under Industrial image analysis (talk 2) above.
  7. WebGL path tracing - benchmark and challenges
    Thomas Kjeldsen / Computer Graphics Lab, Alexandra Instituttet
    WebGL is a technology that enables GPU-accelerated 3D graphics in a web browser. In this talk, we will demonstrate how WebGL can be used to utilize the GPU for real-time ray tracing. Our aim is to develop a full-featured ray tracer that correctly captures global illumination, supports multiple types of materials, and is able to render scenes consisting of thousands of polygons directly in a browser window. We will discuss some of the challenges that we encountered when using WebGL for general-purpose computation tasks such as ray tracing.
Sponsors: Dalux, ITMAN, DTU Digitalarts, DTU Food, Medico Innovation