JMLR Special Topic on Causality
Guest editors (in alphabetical order):
Constantin F. Aliferis, Vanderbilt University,
Gregory F. Cooper, University of Pittsburgh,
Andre Elisseeff, IBM Research,
Isabelle Guyon, Clopinet,
Peter Spirtes, Carnegie Mellon University.
The Journal of Machine Learning Research (www.JMLR.org) is soliciting papers on all aspects of causality in machine learning, including theory, algorithms, and applications.
Causality refers to the study of methods that predict the
consequences of given actions and reveal the underlying structure of a data-generating
process. Specific areas relevant to this JMLR special topic include,
but are not limited to:
a. Methods to discover causal structure from data and to perform causal inference (e.g., estimate causal effects, predict effects of actions, produce most probable causal explanations, perform inference with counterfactuals, etc.). Methods based on the use of multiple types of data (e.g., observational, experimental, case-control) and methods based on combining knowledge (e.g., in the form of constraints or prior beliefs) and data are encouraged.
Such methods may be based on Bayesian Networks and other Probabilistic Graphical Models, Markov Decision Processes, Structural Equation Models, Propensity Scoring, Information Theory, Granger Causality, or other appropriate frameworks.
b. Theoretical studies of causal discovery, including:
- Operational definitions of causality suitable for practical causal discovery.
- Formal criteria (e.g., statistical tests of significance of causal relationships, model scoring measures) for causal model selection.
- Properties (e.g., soundness/consistency, stability, sample efficiency, computational efficiency) of existing and novel causal discovery methods.
- Statistical complexity and feasibility of learning causal relationships under different assumptions.
- Formal connections relevant to causal discovery among diverse fields such as Artificial Intelligence, Decision Theory, Econometrics, Markov Decision Processes, Control Theory, Operations Research, Planning, Experimental Design theory, etc.
c. Characterization, via theoretical and empirical approaches, of the causal interpretability of non-causal machine learning and statistical methods, especially feature selection methods:
- Characterizing major existing and novel causally- and non-causally-motivated feature selection methods in terms of causal validity.
- Studying the concept of relevance and its relationship with causality.
- Causal feature selection methods with improved computational performance and accuracy, suitable for high-dimensional problems and/or small sample sizes.
d. Assumptions for causal discovery. Theoretical and empirical study of:
- Violations of typical assumptions for causal discovery (e.g., the Causal Faithfulness Condition, the Causal Markov Condition, Causal Sufficiency, causal graph sparseness, linearity, specific parametric forms of data distributions, etc.).
- Prevalence and severity of such violations, including their worst-case and average-case effects.
- Novel or modified assumptions and their properties.
e. Evaluation methods, including the study of appropriate performance measures, research designs, benchmarks, etc., to empirically assess the performance, advantages, and limitations of causal discovery methods.
f. Real-world applications and benchmarking of causal discovery algorithms, including rigorous studies of highly innovative software environments for causal discovery.
Submission instructions:
1. Follow the standard JMLR paper preparation and submission instructions and upload your paper to http://jmlr.csail.mit.edu/manudb/
2. Send notification (including the JMLR submission number and a copy of your paper) to the editors at email@example.com
3. Indicate in the cover letter that the submission is intended for this special topic.