Master data

Integrated Development 4.0
Description:

Task 1.2.1 – Capture the Competences and Information Flow

Development of intelligent statistical data pre-processing methods for semiconductor manufacturing. Probabilistic graphical modeling, in close cooperation with the group of Prof. Reiner, will be used to infer dependence structures and to derive dimension reduction schemes.
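
As a rough illustration of the graphical-modeling idea, the following minimal Python sketch fits a Gaussian graphical model (graphical lasso) to synthetic data. The number of variables, the data, and the edge threshold are illustrative assumptions, not project specifics.

    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 12))      # e.g. 12 pre-processed equipment signals (synthetic)
    X[:, 1] += 0.8 * X[:, 0]                # inject one dependency for illustration

    model = GraphicalLassoCV().fit(X)
    precision = model.precision_            # estimated sparse precision matrix
    # Non-zero off-diagonal entries indicate conditional dependencies between signals;
    # weakly connected signals are candidates for dimension reduction.
    edges = np.argwhere(np.abs(np.triu(precision, k=1)) > 1e-3)
    print(edges)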

Task 1.2.2 – Dynamic Knowledge Update

  • Development of intelligent learning algorithms to extract relevant information from big data sets, with a focus on adaptive and networked (smart) production systems, in cooperation with the group of Prof. Reiner. Particular attention will be paid to Bayesian regularization methods.
  • Extraction of key parameters for process control from the results of statistical machine learning and Bayesian deep learning algorithms, in close cooperation with the KnowCenter group.
  • Development of Bayesian ensemble filtering and data assimilation methods that incorporate observations and process dynamics through sequential posterior updating (modified Bayesian Kalman filters); see the sketch after this list.
  • Monitoring of probability distributions as data summaries instead of relying only on selected key figures from raw data.
  • Combination of active learning and model choice methods to account for covariate shift (Bayesian statistical learning in non-stationary environments).
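
The sketch below shows one elementary instance of the sequential posterior updating idea: a single predict/update cycle of a linear-Gaussian Kalman filter in Python. The helper name kalman_step and all matrices and values are toy assumptions for illustration, not the project's modified filters.

    import numpy as np

    def kalman_step(m, P, y, F, Q, H, R):
        """One predict/update cycle: prior N(m, P) -> posterior given observation y."""
        # Predict: propagate the current posterior through the process model
        m_pred = F @ m
        P_pred = F @ P @ F.T + Q
        # Update: condition on the new observation
        S = H @ P_pred @ H.T + R                     # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
        m_post = m_pred + K @ (y - H @ m_pred)
        P_post = (np.eye(len(m)) - K @ H) @ P_pred
        return m_post, P_post

    # Toy 1-D example: a latent process level observed with noise
    m, P = np.array([0.0]), np.array([[1.0]])
    F, Q = np.array([[1.0]]), np.array([[0.01]])
    H, R = np.array([[1.0]]), np.array([[0.25]])
    for y in [0.9, 1.1, 1.0]:
        m, P = kalman_step(m, P, np.array([y]), F, Q, H, R)
    print(m, P)   # posterior mean and variance after three observations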

Task 1.2.3 – Knowledge Validation

  • Validation of KPIs for decision-making support with a focus on adaptive and networked (smart) production systems, in close cooperation with the group of Prof. Reiner: validation of key parameters to increase the acceptance of data-driven methods in the semiconductor manufacturing environment.
  • Validation of knowledge about dependencies between advanced dynamic screening methods and production system performance: in particular, we will investigate the use of empirical Bayes estimation of posterior probabilities of enrichment for controlling the False Discovery Rate (FDR); a brief sketch follows below.
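
As a hedged stand-in for the empirical Bayes FDR idea, the following Python sketch computes Storey-type q-values from a set of p-values. The estimator, the tuning parameter lam, and the example p-values are illustrative assumptions and not necessarily the project's exact procedure.

    import numpy as np

    def q_values(p, lam=0.5):
        p = np.asarray(p, dtype=float)
        m = len(p)
        pi0 = min(1.0, np.mean(p > lam) / (1.0 - lam))   # estimated null proportion
        order = np.argsort(p)
        q = np.empty(m)
        prev = 1.0
        # step down from the largest p-value, enforcing monotonicity of the q-values
        for rank, idx in zip(range(m, 0, -1), order[::-1]):
            prev = min(prev, pi0 * p[idx] * m / rank)
            q[idx] = prev
        return q

    p_vals = np.array([0.001, 0.009, 0.04, 0.20, 0.55, 0.83])
    print(q_values(p_vals))   # reject hypotheses with q below the chosen FDR level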

Task 1.3.2 – Data Driven Methods (AI, Deep Learning, Black-Box Modeling, etc.)

  • Development and integration of data-driven methods to support and enable an effective root cause analysis of yield loss (functional ANOVA decompositions, approximate inference algorithms).
  • Development of novel Bayesian variational and perturbation methods and their integration into structured predictors and deep learners of production performance characteristics (Bayesian deep learning); see the sketch after this list.
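
A minimal sketch of the variational flavour of Bayesian deep learning is given below: a mean-field variational linear layer trained by minimizing a negative-ELBO-style loss in PyTorch. The layer sizes, prior, KL weighting, and toy data are all assumptions for illustration, not the project's models.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BayesLinear(nn.Module):
        def __init__(self, n_in, n_out, prior_std=1.0):
            super().__init__()
            self.w_mu = nn.Parameter(torch.zeros(n_out, n_in))
            self.w_rho = nn.Parameter(torch.full((n_out, n_in), -3.0))  # softplus -> std
            self.b = nn.Parameter(torch.zeros(n_out))
            self.prior_std = prior_std

        def forward(self, x):
            w_std = F.softplus(self.w_rho)
            w = self.w_mu + w_std * torch.randn_like(w_std)   # reparameterisation trick
            return x @ w.t() + self.b

        def kl(self):
            # KL( q(w) = N(mu, std^2) || p(w) = N(0, prior_std^2) ), summed over weights
            w_std = F.softplus(self.w_rho)
            return (torch.log(self.prior_std / w_std)
                    + (w_std ** 2 + self.w_mu ** 2) / (2 * self.prior_std ** 2)
                    - 0.5).sum()

    # Toy training loop on random data: loss = data fit + scaled KL penalty
    layer = BayesLinear(8, 1)
    opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
    x, y = torch.randn(64, 8), torch.randn(64, 1)
    for _ in range(100):
        opt.zero_grad()
        loss = F.mse_loss(layer(x), y) + layer.kl() / len(x)
        loss.backward()
        opt.step()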

Task 1.3.3 – Validation of AI Approaches

Jointly with KAI and the group of Prof. Reiner, we will work on the validation of the routines implemented in Task 1.3.2 by comparing expert results with the results of data-driven methods with regard to accuracy, robustness, etc.

With regard to the joint work on the tasks within UC1 (WP 1, 4), we will further focus on relating (raw-data) machine parameters to SPC parameters through Canonical Correlation Analysis and on the choice of a “best” set of training data.
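
For illustration, the following Python sketch relates a block of synthetic machine parameters to a block of synthetic SPC parameters via scikit-learn's CCA. The dimensions, data, and number of components are placeholder assumptions.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(1)
    machine = rng.standard_normal((300, 20))        # raw machine parameters (synthetic)
    spc = machine[:, :5] @ rng.standard_normal((5, 4)) + 0.1 * rng.standard_normal((300, 4))

    cca = CCA(n_components=2).fit(machine, spc)
    U, V = cca.transform(machine, spc)
    # Correlations of the paired canonical variates indicate how strongly the two
    # parameter sets are linked; loadings point to candidate training variables.
    print([np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(2)])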

Keywords: Machine Learning, Data Science, Statistical Learning, Bayesian Deep Learning, Graphical Modeling
Short title: iDev40
Period: 01.05.2018 - 31.12.2022
Contact e-mail: -
Homepage: -

Employees

Employees                     Role             Time period
Gerald Reiner (internal)      -                01.05.2018 - 31.01.2019
Jürgen Pilz (internal)        -                01.05.2018 - 31.12.2022
Konstantin Posch (internal)   Research staff   01.06.2018 - 30.04.2021

Categorisation

Project type Research funding (on request / by call for proposals)
Funding type §27
Research type
  • Applied research
Subject areas
  • 502051 - Economic statistics
Research Cluster
  • Humans in the Digital Age
Gender aspects Gender relevance not selected
Project focus
  • Science to Science (Quality indicator: I)
Classification raster of the assigned organisational units:
working groups
  • Statistik für Halbleiter

Funding

Cooperations

No partner organisations selected