Visionday 2014 Programme
= Industrial Image Analysis
= Technologies for Life Science
= Computer Graphics
= Medical Image Analysis
ABSTRACTS
Keynote
 
  1. Manchester X-ray Imaging Facility
    Philip J. Withers / Manchester X-ray Imaging Facility, School of Materials, University of Manchester
     
Technologies for Life Science
 
  1. In silico screening of absorption enhancers
    Søren H. Welling / DTU and Novo Nordisk
    Enhancement of absorption is central to improving the oral bioavailability of therapeutic peptides. The focus of the present PhD project is to understand and identify the structural properties important for absorption enhancement mechanisms, together with essential properties of relevance for tablet formulation. The idea is to apply statistical and machine-learning tools to extract and comprehend more information from the immense volume of published data on absorption enhancers.
    Here, initial work on predicting absorption enhancement will be presented, along with how quantitative structure-activity relationships (QSAR) can facilitate effective screening of new excipients for medicines.
    The modelling was based on a dataset of surfactant-like compounds from the literature and molecular descriptors calculated from the structure of the compounds. The models of absorption enhancement were verified through internal cross-validation, experimental in vitro testing and literature-matched predictions. With such verified QSAR models it was possible to capture a highly complex biological absorption system sufficiently well for screening purposes, when focusing on the central surfactant-like effect of many enhancers.
    In this exploratory QSAR modelling, the plausibly useful molecular descriptors often outnumber the example molecules in the literature. When the number of available variables (p) exceeds the number of observations (n), the dataset is sparse (p > n) and, for example, ordinary linear regression is not feasible: too many possible explanations are available for the fewer observations, and reproducible conclusions are unlikely to be drawn. Methodologies for handling sparse data include descriptor pre-filtering, regularization, decompositions and ensembles, and are useful for building reproducible, non-overfitted models. This in silico approach opens the possibility of high-throughput screening of new absorption enhancers and a novel method for the physicochemical optimization of surfactant enhancer systems.
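    The sparse-data setting described above lends itself to a brief illustration. Below is a minimal sketch, using scikit-learn rather than the project's own tools, of fitting a regularized (elastic-net) model when molecular descriptors outnumber compounds; all data are synthetic placeholders, not the published QSAR dataset.
    ```python
    # A minimal sketch of regularized regression for a sparse (p > n) descriptor
    # set. Illustration of the general technique only; not the project's pipeline.
    import numpy as np
    from sklearn.linear_model import ElasticNetCV
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_compounds, n_descriptors = 40, 500            # p > n, as described above
    X = rng.normal(size=(n_compounds, n_descriptors))
    true_coef = np.zeros(n_descriptors)
    true_coef[:5] = [2.0, -1.5, 1.0, 0.8, -0.5]     # only a few descriptors matter
    y = X @ true_coef + rng.normal(scale=0.5, size=n_compounds)

    # Elastic-net regularization shrinks most coefficients to zero, keeping the
    # model identifiable and reducing overfitting despite p > n.
    model = ElasticNetCV(l1_ratio=0.9, cv=5, max_iter=10000)
    print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())
    model.fit(X, y)
    print("descriptors retained:", int(np.count_nonzero(model.coef_)))
    ```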
  2. Recommendation Systems at Issuu
    Morten Arngren / Issuu
    Issuu.com is a website platform for publishing and reading digital publications for free and is often described as the YouTube of digital publications. With more than 16 million free publications, the quality of the recommendation system in this ecosystem is vital. The recommendation system at Issuu is customized to be both precise, serving relevant content, and diverse, providing serendipity at the same time. In this talk we will present the challenges of designing such a system and how these can be addressed in the system architecture, the training algorithm and the serving of actual recommendations.
  3. Life-cycle Wealth Management for Individuals
    Kourosh Marjani Rasmussen / Schantz and DTU Management
    Life-cycle financial planning for households deals with the optimal accumulation and depletion of wealth. In the accumulation period the major concern is the allocation of available savings into a retirement environment, a free-asset environment or repayment of debt, given that the returns on savings and interest rates on loans, as well as household income, are uncertain. In the depletion period the planning includes the order in which different savings accounts are depleted in the face of longevity risk. Over both periods it is all-important that tax rules and the calculation of welfare benefits are taken into account explicitly for a given jurisdiction. We suggest an optimization-simulation framework to determine the right trade-off between model realism and a sufficient degree of explicit risk modelling.
  4. Identification of Absorption Enhancers from Scientific Articles and Patents using text-mining (I2E - Interactive Information Extraction)
    Sten B. Christensen / Novo Nordisk
    Goal: To generate a data set of known absorption enhancers (AEs) from the literature. A large number of patents, full-text articles and article references were text-mined (NLP) using a commercial text-mining application (I2E from the UK company Linguamatics). The data set of AEs from literature/patents is being used for in silico screening of absorption enhancers.
    Methods: Linguamatics' I2E knowledge discovery platform combines four key capabilities to enable users to rapidly extract relevant facts and relationships from large document collections:

    • Natural Language Processing (NLP): using linguistics to quickly interpret the meaning of unstructured text sources.
    • Search engine approach: where users can define and refine ad hoc queries interactively, returning results in real time.
    • Intuitive reporting: presenting extracted information with drill-down to supporting evidence.
    • Domain knowledge plug-in: providing enhanced semantic search capabilities using domain knowledge such as taxonomies, thesauri and ontologies.
    Using the capabilities in I2E, queries were formulated in order to extract small molecule AEs from the document collection.
    Results: The main benefit of text-mining to identify AEs, compared to traditional search and manual processing of the found results, is the time saved. Larger document collections can thus be processed, and thereby a larger data set possibly generated for further processing.
    Conclusions: The method was able to extract AEs with reasonable precision, enough to speed up the workflow compared to traditional search and manual processing.
  5. The Quest of Hyperspectral Imaging in Pharmaceutics for PAT Implementations
    Jose Manuel Amigo Rubio / University of Copenhagen, Department of Food Science
    The use of computer vision systems to control solid dosage form manufacturing processes and product quality in a nondestructive manner has become increasingly important in pharmaceutical industrial processing. Motivated to meet the expectations of the US Food and Drug Administration (FDA), pharmaceutical companies are investing considerable effort and resources to implement so-called process analytical technology (PAT) methodologies for quality assurance. The core of the PAT initiative is increased process understanding through monitoring of critical performance attributes, leading to better process control and ultimately improved drug quality. Several years ago, this motivation put forward the use of hyperspectral devices and chemometrics to increase control of the final quality assessment in production lines. So far, though, very few examples of implementing hyperspectral devices in production lines are found in the literature. This talk will present the main benefits and drawbacks that pharmaceutical companies meet when they need to implement hyperspectral vision systems in production lines, and will open the discussion of the real needs for their implementation.
Industrial Image Analysis
 
  1. Bio-Medical Imaging at Max-IV
    Martin Bech / Lund University
    Phase-contrast x-ray imaging has recently been shown to give improved contrast in soft-tissue samples. In particular with highly brilliant synchrotron radiation, different approaches with coherent x-rays can give very good contrast at resolutions ranging from micrometers to a few tens of nanometers. This is an excellent tool for studying three-dimensional morphology in a non-destructive way. In the northern part of Lund, the Max-IV synchrotron is under construction and will be ready for the first scientific experiments in 2016. But what kind of science can we do at Max-IV? One of the beamlines to be installed at Max-IV is the MedMax beamline for bio- and medical imaging. I will discuss previous examples of biomedical synchrotron phase-contrast imaging experiments and the possibilities opening up at Max-IV.
    Figure: Coronal views of mouse abdomen: (A) cryo-sliced image, and virtual cuts through (B) phase-contrast tomography and (C) attenuation CT. The stomach (st), liver (li) and intestines (in) are labeled. The scale bar indicates 5 mm. (Tapfer, Bech et al., J. Microsc. 253(1), 24-30, 2014)
  2. Measuring Radiometric Properties and the Appropriateness of Existing Analytic BRDF Models
    Jannik Boll Nielsen / DTU Compute
    We will address the need for parsimonious (i.e. good, low-parameter) radiometric models when measuring material reflectance properties. In many practical applications, e.g. visual quality control and reverse engineering, we in essence have only a few measurements of a material's radiometric properties to work from.
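    As a hedged illustration of fitting a parsimonious analytic model to sparse radiometric measurements (not the method presented in the talk), the sketch below fits a three-parameter Phong-style lobe to a handful of synthetic reflectance samples.
    ```python
    # A minimal sketch: fit a low-parameter analytic BRDF (a Phong-style lobe)
    # to a few reflectance samples. Synthetic data; not the talk's method.
    import numpy as np
    from scipy.optimize import curve_fit

    def phong_lobe(theta_r, k_d, k_s, n):
        """Diffuse term plus a specular lobe around the mirror direction."""
        return k_d / np.pi + k_s * (n + 2) / (2 * np.pi) * np.cos(theta_r) ** n

    # Angles (radians) between the view direction and the mirror direction.
    theta = np.array([0.0, 0.1, 0.2, 0.4, 0.8, 1.2])
    rng = np.random.default_rng(1)
    measured = phong_lobe(theta, 0.3, 0.6, 40.0) * (1 + 0.05 * rng.normal(size=theta.size))

    # Three parameters against six measurements: a parsimonious model is the
    # only kind that can be estimated reliably from so little data.
    params, _ = curve_fit(phong_lobe, theta, measured, p0=[0.2, 0.5, 20.0],
                          bounds=([0.0, 0.0, 1.0], [1.0, 1.0, 500.0]))
    print("fitted k_d, k_s, n:", params)
    ```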
  3. 2D Static Light Scattering for Dairy Based Applications
    Jacob Lercke Skytte / DTU Compute
    2D Static Light Scattering (2DSLS) is a novel hyperspectral (~450-1030 nm) optical technique, from which multiple light scattering phenomena can be observed and related to the microstructural properties of an investigated sample. Furthermore, the technique is remote and non-invasive, which potentially makes it suitable for in-line process control applications. The talk provides an introduction to the 2DSLS technique, covering the basics as well as ongoing work. Finally, a case study will be presented in which 2DSLS is applied to protein microstructures in stirred yogurt products.
  4. GPS/GNSS-based Positioning, Navigation and Timing - Status and Evolution
    Anna B.O. Jensen / AJ Geomatics
    GPS as a system for positioning, navigation and timing is today very well known from, for instance, smartphones, car navigation systems and aircraft navigation. But there are more global navigation satellite systems (GNSS) than the American GPS. The Russian GLONASS system, which is operational, along with the development of the European Galileo and Chinese Beidou systems, is causing rapid development in the field of positioning and navigation applications. This presentation will review the current status of the satellite systems and the performance obtainable with user applications. The evolution expected during the next few years will also be reviewed, with a focus on the European Galileo and on the advantages that can be obtained by combining GPS with the other global navigation satellite systems.
  5. A Glimpse Through The Letterbox: Quality Inspection of Lyophilized Product in The Pharmaceutical Industry
    Kartheeban Nagenthiraja / InnoScan A/S
    Background: Lyophilization (lyo) of parenteral drugs, i.e. freeze-drying, extends the shelf life of pharmaceutical products, and the preservation process has therefore gained wider application in recent years. Recently, InnoScan was requested to develop a robust method to detect dark particles on top of the lyophilized mass, the so-called 'lyo cake'. The main challenge in this request was not detecting the dark particles, but rather avoiding false detection of acceptable product variations. Lyo cakes are typically shrunken and cracked and have crevices of multiple sizes, which could potentially be detected as false errors. Furthermore, the task was complicated by the design of the vial: a non-transparent cap covering the shoulder of the vial restricted the view of the cake surface. To not compromise the investment, a low false rejection rate was pivotal for the customer. Overall, the aim of the project was to develop a robust vision-based technology to detect dark particles on the surface of lyo cakes, while maintaining an acceptable specificity.
    Method: To facilitate image acquisition of the lyo cake, we combined line-scan technology with the flexibility of rotating the vial. The challenge posed by the narrow acquisition window is similar to describing the pattern of a mat by looking through a letterbox. The letterbox provides only a partial view of the mat; however, if the mat is rotated incrementally through 360° while images of the view are acquired, a complete picture of the mat can be composed. In general terms, we used this approach to image the lyo cake surface, and the surface image was subsequently analysed by an algorithm to detect particles. Differentiating between dark particles and crevices in the cake is the challenging task for the detection algorithm. Applying collimated light to the surface gives a distinct reflection pattern depending on the target: crevice or particle. Using the rotation, reflection patterns from the objects can be observed at multiple angles, and the particle can be detected by autocorrelation, since its reflection pattern is uniform regardless of the viewing angle (a simple numerical illustration of this idea follows the abstract). The detection technique was tested on 100 vials, of which 20 contained a dark particle on the surface. The performance of the developed technology was compared with manual inspection carried out by trained personnel.
    Results: The implemented technology performs well in tests and demonstrates better performance than manual inspection, which is the 'gold standard' prescribed by the regulatory authorities. Specifically, the automated technology was superior at differentiating between micro-crevices and dark particles. Furthermore, the line handling speed of the machine, corresponding to 350 vials/minute, substitutes a manual workforce of 87.5 persons.
    Discussion: To comply with regulatory demands, an automated inspection must show an equal or better detection rate in comparison to the manual baseline. Our automated technique for detecting dark particles on lyo cakes demonstrated better performance than manual inspection and will be implemented at the customer's production site.
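    The numerical illustration below is a schematic stand-in for the angle-uniformity idea in the Method section, not InnoScan's production algorithm; the image stack is synthetic and represents the same surface region observed at several rotation angles.
    ```python
    # A schematic illustration: a dark particle looks dark from every rotation
    # angle, whereas a crevice's appearance varies strongly with angle.
    # Synthetic image stack; not InnoScan's algorithm.
    import numpy as np

    rng = np.random.default_rng(0)
    n_angles, h, w = 12, 64, 64
    stack = rng.normal(1.0, 0.02, size=(n_angles, h, w))   # bright cake background
    stack[:, 10:14, 10:14] = 0.2                           # particle: dark at all angles
    for a in range(n_angles):
        if a % 3:                                          # crevice: dark at some angles only
            stack[a, 40:44, 40:46] = 0.3

    dark = stack.mean(axis=0) < 0.6                        # candidate dark regions
    # Uniformity across angles: particles vary little, crevices vary a lot.
    variation = stack.std(axis=0) / (stack.mean(axis=0) + 1e-6)
    particles = dark & (variation < 0.1)
    crevices = dark & (variation >= 0.1)
    print("particle pixels:", int(particles.sum()), "crevice pixels:", int(crevices.sum()))
    ```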
  6. Vision Systems for Glassworks
    Jørgen Læssøe / JLI Vision A/S
    Glassworks are mass-production factories. On some production lines the speed exceeds 10 parts per second, so automatic inspection is essential. Inspection is traditionally done at the cold end, but can also be applied at the hot end, just after the glass forming. This presents quite a few challenges: protecting the equipment from the environment requires a lot of engineering to keep away the heat and to protect the optical parts from oil contamination. Inspecting at the hot end means that the machine operators get instant feedback when one of the many tools in the forming process drifts; without hot-end inspection you have to wait up to one hour before the cold-end equipment reports the problems. The vision systems must operate at high speed and find details smaller than 0.05 mm². As the glass is red hot and soft it is not possible to do any handling, and the vision system must work well without alignment of the tableware or containers. Inspecting tableware requires many backlighting patterns to reveal the different optical defects; to achieve this, a dynamic light box is used. The presentation will discuss the design, programming and installation of these vision systems in the harsh glassworks environment.
  7. Finite Element Modeling of Micro Scale Surface Phenomena
    Mary Kathryn Thompson / DTU Mechanics
    Micro scale surface phenomena can have a substantial impact on the macro scale behavior of engineering systems. Surface metrology data can be used to construct finite element models with real surface roughness. These models can then be used to predict the system behavior and to design functional surfaces. However, the characterization and validation of these models remains a challenge. This presentation provides an overview of finite element surface modeling. It presents two case studies related to fluid sealing and thermal contact resistance. It discusses the relationship between mechanical design and image processing in this context. Finally, it outlines the challenges and opportunities for using statistical and image processing techniques to analyze and compare the results of FE surface models.
  8. Vision for robotics - Why estimating pose uncertainty is important
    Henrik Gordon Petersen / The Maersk Mc-Kinney Moller Institute
    In this talk, examples will be given of the robotic automation challenges in the current project CARMEN ("Center for Advanced Robotic Manufacturing ENgineering"), funded by The Danish Council for Strategic Research, and in the SPIR project "MADE - Platform for Future Production". The emphasis will be on examples of assembling parts that initially are randomly located. The assembly can be achieved either by specialized feeders that take the parts from random locations to a well-defined, aligned location, or by using vision for bin- or belt-picking. Either way, the parts will end up at locations that are known only up to some pose uncertainty. It will be discussed how these uncertainties are important for the robustness of the execution of the assembly tasks, and why it will, at least in the future, be important to have reliable estimates of the 6D pose uncertainty probability distribution.
  9. From Research to Industrial Robot Vision
    Michael Nielsen / TI Odense
    This talk will address the contrasts and challenges between the two worlds of research and industrial robot vision. It is, so to speak, a clash of cultures, with the usual prejudices to overcome. The challenges and processes are different in all stages of the projects. A specialist, which computer vision engineers usually are, needs to acquire more general skills and understand the systems that the vision system has to be integrated with. Robot vision typically consists of two parts: vision for guiding the robot, and vision for quality inspection to see what happened after the robot did its work. During the talk, examples of both will be presented.
  10. LED spectral imaging
    Jens Michael Carstensen / DTU Compute and Videometer A/S
    Spectral imaging is a very versatile technique for food, pharma and agri product quality assessment. It may simultaneously measure a broad range of food safety and food quality parameters, and this is done rapidly, non-destructively and without physical contact with the sample. Alternatives are typically time-consuming, labor-intensive or require highly trained sensory panels. LED spectral imaging uses narrow-band LEDs to provide the spectral resolution on a high-resolution monochrome camera sensor by strobing different LEDs into an integrating sphere before the light diffusely illuminates the sample in a very homogeneous way. Compared to hyperspectral imaging, LED spectral imaging does not require movement of the sample, and it generally provides higher spatial resolution and higher speed at the cost of spectral resolution. One important advantage of LED spectral imaging is that the dynamic range may be optimized for each wavelength simply by adjusting the strobe intensity and/or the strobe length. Another advantage is that it does not depend on a broadband incandescent light source, which will typically be subject to stability issues.
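    To make the per-wavelength dynamic-range point concrete, here is a small sketch of one plausible scheme (an assumption-laden illustration, not the Videometer implementation): each LED band gets its own strobe length so its brightest pixel stays just below saturation, and the acquired band is then normalized by that strobe length.
    ```python
    # A minimal sketch of per-band exposure optimization: scale each LED band's
    # strobe length so the brightest pixel sits just below saturation, then
    # normalize by the strobe length so bands stay comparable. Synthetic data;
    # not the actual instrument software.
    import numpy as np

    rng = np.random.default_rng(0)
    n_bands, h, w = 19, 100, 100
    trial = rng.uniform(0.05, 0.9, size=(n_bands, h, w))     # trial exposure of 1 ms per band
    saturation, target = 1.0, 0.9

    # Per-band strobe length (ms) bringing the brightest pixel to 90% of saturation.
    strobe_ms = target * saturation / trial.reshape(n_bands, -1).max(axis=1)
    acquired = np.clip(trial * strobe_ms[:, None, None], 0.0, saturation)
    normalized = acquired / strobe_ms[:, None, None]          # comparable spectral cube
    print("strobe lengths (ms):", np.round(strobe_ms, 2))
    ```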
Medical Image Analysis
 
  1. Drug Dissolution and Release Testing by UV Imaging
    Jesper Østergaard / University of Copenhagen, Department of Pharmacy
    The dissolution and/or release of a drug substance from the drug formulation is a prerequisite for ensuring the efficacy of a drug product. Thus dissolution and release testing remain key activities guiding drug formulation development as well as quality control tools. The presentation will focus on the new possibilities that the spatially and temporally resolved data obtained by UV imaging offer in in vitro drug dissolution and release testing studies. A compound-sparing, UV imaging-based approach useful in solid form selection (salt, polymorphs, hydrate, cocrystal) is presented. Recent integration of the UV imaging approach with in situ Raman spectroscopy offers simultaneous measurement of drug dissolution rate and solid-state phase transformations. In proof-of-concept studies, sodium naproxenate and theophylline anhydrate were observed to convert into the more stable solid forms (naproxen and theophylline monohydrate) during dissolution [1]. Hydrogels have been subjected to UV imaging for non-intrusive measurement of drug distribution and diffusion in relation to parenteral administration of drugs. A gel matrix has been used as a simple model of subcutaneous tissue, allowing detailed characterization of the behavior of low-molecular-weight drug substances as well as proteins [2, 3]. The case is made that UV imaging may offer detailed insights into drug dissolution and release processes and mechanisms in formulation development that are otherwise difficult to achieve.

    [1] Østergaard, J., Wu, J. X., Naelapää, K., Boetker, J. P., Jensen, H., & Rantanen, J. Simultaneous UV imaging and Raman spectroscopy for measurement of solvent-mediated phase transformations during dissolution testing. J. Pharm. Sci. 103, 1149-1156. 2014.
    [2] Ye, F., Yaghmur, A., Jensen, H., Larsen, S. W., Larsen, C., & Østergaard, J. Real-time UV imaging of drug diffusion and release from Pluronic F127 hydrogels. Eur. J. Pharm. Sci. 43, 236-243. 2011.
    [3] Jensen, S. S., Jensen, H., Cornett, C., Møller, E. H., & Østergaard, J. Insulin diffusion and self-association characterized by real-time UV imaging and Taylor dispersion analysis. J. Pharm. Biomed. Anal. 92, 203-210. 2014.
  2. Computational Anatomy: Simple Statistics on Interesting Spaces for Developing Imaging Biomarkers Analysis
    Sarang Joshi / University of Utah
    A primary goal of Computational Anatomy is the statistical analysis of anatomical variability. Large Deformation Diffeomorphic transformations have been shown to accommodate the geometric variability, but performing statistics on diffeomorphic transformations remains a challenge. I will start with the simple concept of defining the "Average Anatomy" and then extend this to the study of regression and co-variation of anatomical shape with independent variables. The motivation is to model the inherent relation between anatomical shape and clinical measures and to evaluate its statistical significance. We use Partial Least Squares for the multivariate statistical analysis of the deformation momenta under the Large Deformation Diffeomorphic framework. The statistical methodology extracts pertinent directions in the momenta space and the clinical response space in terms of latent variables. We report the results of this analysis on subjects from the ADNI database.
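    As a toy stand-in for the multivariate step described above, the sketch below uses scikit-learn's PLS regression to relate high-dimensional shape features (standing in for deformation momenta) to a few clinical scores through latent variables; all data are synthetic, and this is neither the authors' code nor ADNI data.
    ```python
    # A minimal sketch of Partial Least Squares relating high-dimensional shape
    # features (a stand-in for deformation momenta) to clinical measures via a
    # few latent variables. Synthetic data; not the authors' implementation.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n_subjects, n_features, n_clinical = 80, 3000, 3
    latent = rng.normal(size=(n_subjects, 2))                 # shared latent structure
    X = latent @ rng.normal(size=(2, n_features)) + 0.1 * rng.normal(size=(n_subjects, n_features))
    Y = latent @ rng.normal(size=(2, n_clinical)) + 0.1 * rng.normal(size=(n_subjects, n_clinical))

    pls = PLSRegression(n_components=2)
    pls.fit(X, Y)
    # x_weights_ gives pertinent directions in feature ("momenta") space;
    # x_scores_ are the latent variables for each subject.
    print("explained clinical variance (R^2):", round(pls.score(X, Y), 3))
    ```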
  3. Automated Segmentation of Magnetic Resonance Brain Images Using Bayesian Modeling
    Oula Puonti / DTU Compute
    With the rapid development in the field of automatic whole-brain segmentation, many different segmentation tools have become available for the quantitative analysis of magnetic resonance (MR) brain images. Most of these methods are, however, aimed at neurological research and are not applicable to everyday clinical use. One of the main problems faced in the computational analysis of clinical MR data is the large variation in the quality and contrast properties of the acquired scans due to different scan sequences, scanner types and even scanner software. This presentation will outline a fast atlas-based segmentation framework that is capable of simultaneously segmenting 41 different structures in the brain, automatically adapts to different intensities in the target scans and is able to handle multi-contrast data. Results are presented along with the theory, and finally some avenues for extending the models to include pathologies are discussed.
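    The sketch below illustrates the general flavour of such an atlas-based Bayesian approach on synthetic data: per-voxel atlas priors combined with Gaussian intensity models whose parameters are re-estimated from the target scan by EM, so the same atlas adapts to scans with different contrast. It is a simplified illustration, not the presented framework.
    ```python
    # A simplified sketch of atlas-based Bayesian segmentation: each voxel has an
    # atlas prior over labels, label intensities are Gaussian, and the Gaussian
    # parameters are re-estimated from the target scan by EM so the model adapts
    # to the scan's contrast. Synthetic data; not the presented framework.
    import numpy as np

    rng = np.random.default_rng(0)
    n_voxels, n_labels = 10000, 3
    prior = rng.dirichlet(np.ones(n_labels), size=n_voxels)        # atlas prior per voxel
    true_label = np.array([rng.choice(n_labels, p=p) for p in prior])
    intensity = rng.normal(np.array([30.0, 80.0, 150.0])[true_label], 10.0)

    means = np.array([20.0, 100.0, 200.0])                         # deliberately wrong start
    sigmas = np.full(n_labels, 30.0)
    for _ in range(20):
        # E-step: posterior label probabilities = atlas prior x Gaussian likelihood.
        lik = np.exp(-0.5 * ((intensity[:, None] - means) / sigmas) ** 2) / sigmas
        post = prior * lik
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate the intensity model from the target scan itself.
        w = post.sum(axis=0)
        means = (post * intensity[:, None]).sum(axis=0) / w
        sigmas = np.sqrt((post * (intensity[:, None] - means) ** 2).sum(axis=0) / w)

    segmentation = post.argmax(axis=1)
    print("agreement with ground truth:", round((segmentation == true_label).mean(), 3))
    ```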
  4. Online Image Analysis Applications for Automated Segmentations using Shape Models
    Karl Sjöstrand / Lund University
    Clinical routine - the nirvana of struggling medical imaging application makers working towards daily use of their software at multiple hospitals. But what is the key to reaching this elusive goal? Full automation? Quick computation? Accurate results? In this talk we argue that you need all of them - and much more. Weebeans is a functional imaging application for the investigation of kidney problems in children. It is also a research project focusing on usability in the form of modern user interfaces and tight integration of the application with existing hospital information systems, all aiming towards use in clinical routine. In line with the session topic, we also discuss the surprising connection between clever application distribution and the quality of shape models.
Computer Graphics
 
  1. Fast Procedural Modelling and Rendering - Elementacular!
    Brian Bunch Christensen / The Alexandra Institute, Computer Graphics Lab
    Designing natural phenomena such as clouds and rocks for digital film or game productions is often very costly due to the high complexity of the geometry. Specifically, creating volumetric clouds using fluid simulations often results in an inconvenient process characterized by a very indirect control through unintuitive simulation parameters and extensive computation times. On the other hand, using a purely procedural method makes it hard to tweak certain aspects or areas of the cloud without inadvertently changing others. Both approaches hinder the creative process and result in higher production costs.
    We present a method that allows an artist to work directly on the final cloud (or rock) in real-time. The method converts any (potentially self-intersecting) mesh to a volumetric representation, which is rendered in high quality in order to guide the artist's decisions. The mesh can be sculpted to adjust the overall shape with instant updates to the generated volume, while the fine-scale details can be tweaked using procedural noise.
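    A tiny sketch of the general "coarse shape plus procedural noise" idea follows (not the Elementacular implementation): a voxel density field is built from a smooth base shape, and octaves of band-limited noise add fine detail without changing the overall form.
    ```python
    # A schematic sketch of "coarse shape plus procedural noise": a voxel density
    # field from a soft base shape, with octaves of upsampled random noise added
    # for fine detail. Not the Elementacular implementation.
    import numpy as np
    from scipy.ndimage import zoom

    res = 64
    z, y, x = np.mgrid[:res, :res, :res] / (res - 1) - 0.5
    base = np.clip(1.5 - 4.0 * np.sqrt(x**2 + y**2 + z**2), 0.0, 1.0)   # soft spherical "cloud"

    rng = np.random.default_rng(0)
    noise = np.zeros_like(base)
    for octave, amp in [(4, 0.5), (8, 0.25), (16, 0.125)]:
        coarse = rng.normal(size=(octave,) * 3)
        noise += amp * zoom(coarse, res / octave, order=3)              # band-limited octave

    density = np.clip(base * (1.0 + noise), 0.0, 1.0)                   # detail only where the shape is
    print("occupied voxels:", int((density > 0.05).sum()))
    ```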
  2. Tetrahedral Meshes at Work: Simulation, Segmentation, and Optimization
    Asger Nyman Christiansen / DTU Compute
    Imagine you have a 3D shape and want to deform it. For example, you want to simulate the deformation of fluids when subjected to physical forces. Or you want to optimise the shape of a structure to ensure that it does not break or that it remains standing. Or you want to fit shapes such that they segment image data. I will show that using a tetrahedral mesh and the Deformable Simplicial Complex (DSC) method can be advantageous for these and many more applications. In particular, tetrahedral meshes go well together with any application that applies the finite element method. Traditionally, a fixed grid and the level set method have been used; however, this requires conversions from an explicit to an implicit representation and back again. Here, on the other hand, the surface of the shape is directly embedded and explicitly represented in the tetrahedral mesh. Furthermore, multiple labels, for example to represent multiple materials, are handled naturally. Finally, I will show how simple it is to apply the method using the open source framework available at www.github.com/asny/DSC.
  3. Banding in Games - A Noisy Rant
    Mikkel Gjøl / Playdead
    This talk will explore various causes of banding in games and how to solve them using noise.
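    As a hedged, minimal illustration of the core idea (not Playdead's shader code): quantizing a dark, slowly varying gradient to 8 bits produces visible bands, and adding roughly one least-significant bit of noise before quantization trades the bands for unobjectionable grain.
    ```python
    # A minimal illustration: 8-bit quantization of a smooth dark gradient causes
    # banding (long flat runs of identical values); adding ~1 LSB of noise before
    # quantization breaks the bands into grain. Not Playdead's shader code.
    import numpy as np

    def longest_constant_run(x):
        """Length of the longest run of identical consecutive values."""
        change = np.flatnonzero(np.diff(x) != 0)
        edges = np.concatenate(([0], change + 1, [x.size]))
        return int(np.diff(edges).max())

    gradient = np.linspace(0.0, 0.1, 1920)              # dark, slowly varying gradient
    banded = np.round(gradient * 255) / 255             # straight 8-bit quantization

    rng = np.random.default_rng(0)
    dither = (rng.random(gradient.size) - 0.5) / 255    # ~1 LSB of uniform noise
    dithered = np.round((gradient + dither) * 255) / 255

    print("longest flat run: banded", longest_constant_run(banded),
          "pixels, dithered", longest_constant_run(dithered), "pixels")
    ```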
  4. Advanced WebGL
    Morten Nobel-Jørgensen / DTU Compute
    Over the last year, WebGL has become the standard method for showing 3D content in a web browser. Today around 80% of personal computers have a WebGL-capable browser and graphics card. This talk will focus on some of the new features found in the latest builds of web browsers, including what we can expect from WebGL 2.0. We will compare the strengths and weaknesses of WebGL with those of traditional OpenGL, and finally look into how large-scale C/C++ OpenGL applications can be turned into web applications using the source-to-source compiler Emscripten.
Sponsors: DTU Compute, Medico Innovation