Data Mining on High Volume Simulation Output
Modern computer-aided simulation tools used by various industries produce gigabytes of data, and a single run can take days or even weeks of computation. To make the best use of this data, the DAMIOSO project develops algorithms and tools for managing, mining, learning from, and optimizing over such massive volumes of data. Computer-aided simulation plays a central role in modern design processes in industries such as the automotive industry: it provides feedback to designers without the overhead of, for example, building a physical model for wind-tunnel testing. In the near future, industry is likely to run even more complex simulations, increasing the amount of generated data from gigabytes to terabytes. Managing these volumes usefully requires new automated methods. The DAMIOSO project aims to create such methods, focusing on three main topics: data storage, knowledge extraction, and automated optimization. Combined, these yield a toolbox of general-purpose methods that can be applied to many problems.
- Emmerich M and Deutz A (2014), "Time Complexity and Zeros of the Hypervolume Indicator Gradient Field", In EVOLVE - A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation III. Vol. 500, pp. 169-193. Springer International Publishing.
- Sosa Hernandez VA, Schuetze O and Emmerich M (2014), "Hypervolume Maximization via Set Based Newton's Method", In EVOLVE - A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation V. Vol. 288, pp. 15-28. Springer International Publishing.
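Both publications above concern the hypervolume indicator, a standard quality measure in multi-objective optimization: the volume of objective space dominated by a set of solutions, bounded by a reference point. As a minimal illustrative sketch (not the methods of the cited papers), the two-objective case under minimization can be computed by summing disjoint rectangular slices; the function name and the choice of reference point are assumptions for illustration.

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by a set of 2D objective vectors
    (minimization) relative to reference point ref, computed as a
    sum of disjoint rectangular slices.  Illustrative sketch only."""
    # Keep only points that strictly dominate the reference point.
    pts = [p for p in points if p[0] < ref[0] and p[1] < ref[1]]
    # Extract the nondominated front: sort by f1 ascending, then keep
    # points whose f2 strictly decreases.
    pts.sort()
    front = []
    for p in pts:
        if not front or p[1] < front[-1][1]:
            front.append(p)
    # Sum the rectangle slice each front point adds below its predecessor.
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in front:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

For example, the front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4) dominates a region of volume 6; hypervolume-based optimization, as studied in the papers above, seeks solution sets that maximize this value.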