/r/causality
Causality (also referred to as causation, or cause and effect)
Causality is the natural or worldly agency or efficacy that connects one process (the cause) with another process or state (the effect), where the first is partly responsible for the second, and the second is partly dependent on the first. In general, a process has many causes, which are said to be causal factors for it, and all lie in its past. An effect can in turn be a cause of, or causal factor for, many other effects, which all lie in its future. Causality is metaphysically prior to notions of time and space.
Causality is an abstraction that indicates how the world progresses, so basic a concept that it is more apt as an explanation of other concepts of progression than as something to be explained by others more basic. The concept is like those of agency and efficacy. For this reason, a leap of intuition may be needed to grasp it. Accordingly, causality is implicit in the logic and structure of ordinary language. https://en.wikipedia.org/wiki/Causality
Noise has been found to enhance the emergence of interesting dynamical behaviors in some nonlinear physical systems. However, for any given system, a universal method for designing appropriate noise forms does not exist. This work proposes a theory-guided AI framework to design artificial noise capable of inducing energy-saving complete synchronization in arbitrary coupled nonlinear physical systems. The framework ensures that noise-induced synchronization is valid not only in the vicinity of the synchronization manifold but also far away from it, going beyond traditional investigations. Consequently, this work opens broad avenues for regulating real physical systems using an appropriate amount of AI-designed noise.
Has anyone used econml's CausalAnalysis object? I wanted to check whether there is any guidance on interpreting the results it produces.
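For reference, a minimal sketch of how that object is typically fitted and read, assuming econml's documented CausalAnalysis API (the column names and data below are made up; double-check the constructor arguments against the current docs):

```python
# Hypothetical illustration of econml's CausalAnalysis workflow; column names are made up.
import numpy as np
import pandas as pd
from econml.solutions.causal_analysis import CausalAnalysis

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "discount": rng.integers(0, 2, 1000),   # candidate treatment column
    "age": rng.normal(40, 10, 1000),        # numeric control
    "segment": rng.integers(0, 3, 1000),    # categorical control
})
y = 2.0 * X["discount"] + 0.05 * X["age"] + rng.normal(size=1000)

ca = CausalAnalysis(
    feature_inds=["discount"],      # columns whose causal effect we want
    categorical=["segment"],        # categorical columns
    nuisance_models="linear",
    heterogeneity_model="linear",
    classification=False,           # continuous outcome
)
ca.fit(X, y)

# Point estimates, standard errors, and confidence intervals per treatment column:
print(ca.global_causal_effect(alpha=0.05))
```

The returned table is read like any treatment-effect estimate: the point column is the estimated effect of moving the treatment feature, and the interval columns give its uncertainty at the chosen alpha.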
Hi, I have a dataset of around 20k customers who are exposed to various treatments (more than one), plus some customers who are not exposed but come from past months. So the questions are:
I am new to Causal Inference
Dear community, I'm new to the field of causal reasoning, and was wondering what conferences are there on the subject.
To give context:
Say I use one of the various causal discovery methods to find the causal graph for one hour of data, and I need to update that graph every hour. Currently I have to rerun the algorithm on the full two hours of data so that I don't lose the relations from the previous hour. Are there any papers or update methods that avoid rerunning the algorithm from scratch and instead update only some of the coefficients or weights?
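One pragmatic idea (a sketch, not a pointer to a specific paper): for methods whose conditional-independence tests or scores depend only on second moments, you can keep running sufficient statistics and let each new hour update them in place, so only the tests are redone, not the pass over the old raw data:

```python
# Sketch of an incremental-statistics idea (not a published algorithm): keep running
# first/second moments so each hourly batch updates the covariance in O(d^2), and any
# Gaussian partial-correlation CI test can then be recomputed without old raw data.
import numpy as np

class RunningCovariance:
    def __init__(self, dim):
        self.n = 0
        self.sum = np.zeros(dim)
        self.outer = np.zeros((dim, dim))

    def update(self, batch):                  # batch: (n_rows, dim) array for one hour
        self.n += batch.shape[0]
        self.sum += batch.sum(axis=0)
        self.outer += batch.T @ batch

    def cov(self):
        mean = self.sum / self.n
        return self.outer / self.n - np.outer(mean, mean)

def partial_correlation(cov, i, j, conditioning):
    idx = [i, j] + list(conditioning)
    prec = np.linalg.pinv(cov[np.ix_(idx, idx)])   # precision of the relevant block
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

# Usage: feed each new hour of data, then rerun only the CI tests / score updates
# on the refreshed covariance instead of on the concatenated raw data.
rng = np.random.default_rng(0)
stats = RunningCovariance(dim=3)
for hour in range(2):
    stats.update(rng.normal(size=(500, 3)))
print(partial_correlation(stats.cov(), 0, 1, conditioning=[2]))
```

This only avoids re-reading data; the discovery algorithm itself still reruns on the updated statistics, so it is a mitigation rather than a true online structure-learning method.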
Hey there, Causality Experts!
Do you have hands-on experience in the creation and application of causal diagrams and/or causal models? Are you passionate about data science and the power of graph-based causal models?
Then we have an exciting opportunity for you!
We - the HolmeS³-project - are conducting a survey as part of a Ph.D. research project located in Regensburg (Germany) aimed at developing a process framework for causal modeling.
But we can't do it alone - we need your help!
By sharing your valuable insights, you'll contribute to improving current practices in causal modeling across different domains of expertise.
You'll be part of an innovative and cutting-edge research initiative that will shape the future of data science.
Your input will be anonymized and confidential.
The survey should take no more than 25-30 minutes to complete.
No matter what level of experience or field of expertise you have, your participation in this study will make a real difference.
You'll be contributing to advancing the field and ultimately making better decisions based on causal relationships.
Click the link below to take our survey and share your insights with us.
https://lab.las3.de/limesurvey/index.php?r=survey/index&sid=494157&lang=en
We kindly ask that you complete the survey by May 2nd 2023 to ensure your valuable insights are included in our research.
Thank you for your support and participation!
Hi, looking for which unis in the uk have a strong research presence in causality, at the postgrad level.
I'm working with a large time-series dataset of smart building sensors (~3000 of them). Is it possible to perform any kind of causal discovery (CD) on this (most benchmark datasets only have N < 100 variables), and if I could recover a graph, how could I validate it without knowing the ground-truth DAG?
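On the second part of that question, one common sanity check when no ground-truth DAG exists is edge stability: rerun the discovery algorithm on bootstrap subsamples and keep only edges that recur. A minimal sketch, where run_cd is a hypothetical placeholder for whatever algorithm is actually used (not a real library call):

```python
# Edge-stability sketch: `run_cd` is a hypothetical stand-in for your causal
# discovery routine; it should return a d x d 0/1 adjacency matrix.
import numpy as np

def run_cd(data):
    # Placeholder: plug in PC, GES, NOTEARS, etc. Here we fake a result.
    d = data.shape[1]
    return (np.abs(np.corrcoef(data, rowvar=False)) > 0.3).astype(int)

def edge_stability(data, n_boot=20, sample_frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    n, d = data.shape
    counts = np.zeros((d, d))
    for _ in range(n_boot):
        idx = rng.choice(n, size=int(sample_frac * n), replace=False)
        counts += run_cd(data[idx])
    return counts / n_boot            # fraction of runs in which each edge appeared

data = np.random.default_rng(1).normal(size=(1000, 10))
freq = edge_stability(data)
np.fill_diagonal(freq, 0)             # ignore self-edges from the fake run_cd
stable_edges = np.argwhere(freq > 0.8)   # edges present in >80% of bootstrap runs
print(stable_edges)
```

Other indirect checks include held-out predictive performance of the implied structural equations and agreement with known physical constraints (e.g., sensors in different buildings should not be adjacent).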
Creating a causal graph normally requires expert input. Is there any way to generate candidate causal models using some automation? In some cases this could be useful.
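One common automated route is constraint-based structure learning. A minimal sketch assuming the causal-learn package (the import path and returned attributes are from memory and should be double-checked against its docs):

```python
# Sketch of fully automated structure learning with the causal-learn package
# (treat the exact imports and attributes as assumptions).
import numpy as np
from causallearn.search.ConstraintBased.PC import pc

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 2 * x + rng.normal(size=1000)
z = y + rng.normal(size=1000)
data = np.column_stack([x, y, z])      # rows = samples, columns = variables

cg = pc(data, alpha=0.05)              # constraint-based PC algorithm
print(cg.G)                            # learned CPDAG (an equivalence class, not a unique DAG)
```

Note that automated discovery returns an equivalence class of graphs rather than a single trusted DAG, so expert review is usually still the final step.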
Does anyone know a good source I can use to implement the do-operator in causality? It would be really helpful if someone could share a good link. Thank you in advance!
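For a concrete starting point while looking for references, the simplest case to implement by hand is the backdoor adjustment on a small discrete model. The sketch below (variable names made up) computes P(y | do(x)) via the truncated-factorization/adjustment formula and contrasts it with the naive conditional probability:

```python
# Hand-rolled do-operator for a tiny discrete model Z -> X, Z -> Y, X -> Y.
# Backdoor adjustment: P(y | do(x)) = sum_z P(y | x, z) * P(z).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 100_000
z = rng.integers(0, 2, n)                                   # confounder
x = (rng.random(n) < 0.2 + 0.6 * z).astype(int)             # treatment depends on z
y = (rng.random(n) < 0.1 + 0.3 * x + 0.4 * z).astype(int)   # outcome depends on x and z
df = pd.DataFrame({"z": z, "x": x, "y": y})

def p_y_do_x(df, x_val):
    total = 0.0
    for z_val, p_z in df["z"].value_counts(normalize=True).items():
        sub = df[(df["x"] == x_val) & (df["z"] == z_val)]
        total += sub["y"].mean() * p_z      # P(y=1 | x, z) weighted by P(z)
    return total

print("P(y=1 | do(x=1)) ~", p_y_do_x(df, 1))                    # true value is 0.6 here
print("naive P(y=1 | x=1) =", df.loc[df.x == 1, "y"].mean())    # biased upward by z
```

For library support, DoWhy and Pearl's own writing on the truncated factorization are the usual next stops; the hand-rolled version above is just the two-line math made explicit.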
Hi redditors,
I'm new to the field of causality, in particular causal discovery (learning the structure, not the effects, of a causal graph, i.e. edges and their direction amongst variables).
I have a question about interventions that I can answer intuitively, but I cannot find a precise demonstration in the literature (on the contrary, I found the opposite claim mentioned in a talk by a causal discovery expert).
Should multiple interventions be carried out mutually exclusively?
Assume the following setting (have faith :D):
Is it correct to say that, without any knowledge about the ground truth causal graph, the agents would need to intervene one at a time?
My intuition sees an intervention (in this context) as manipulating a single actuator device while all other conditions are held equal; is this correct?
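For intuition only, here is a toy linear-SEM simulation (my own illustration, not from any paper) contrasting an atomic intervention with a joint one. The takeaway: a regime in which you intervene on both a cause and its effect carries no information about the edge between them, which is one reason single-target interventions are often preferred, though whether they are strictly required depends on the identifiability framework you assume:

```python
# Toy linear SEM X1 -> X2 -> X3, used to compare intervention designs.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

def simulate(do_x1=None, do_x2=None):
    x1 = rng.normal(size=n) if do_x1 is None else np.full(n, do_x1, float)
    x2 = 1.5 * x1 + rng.normal(size=n) if do_x2 is None else np.full(n, do_x2, float)
    x3 = -2.0 * x2 + rng.normal(size=n)
    return np.column_stack([x1, x2, x3])

obs = simulate()                          # observational regime
single = simulate(do_x2=0.0)              # atomic intervention on X2 only
joint = simulate(do_x1=1.0, do_x2=0.0)    # joint intervention on X1 and X2

print("obs    corr(X1, X2):", round(np.corrcoef(obs[:, 0], obs[:, 1])[0, 1], 2))      # strong
print("do(X2) corr(X1, X3):", round(np.corrcoef(single[:, 0], single[:, 2])[0, 1], 2))  # ~0: X2 screens off X1
# Under the joint intervention both X1 and X2 are held constant, so that regime alone
# cannot reveal whether X1 causes X2; that information must come from other regimes.
```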
https://github.com/soelmicheletti/cdci-causality
I implemented a PyPI package with a simple yet effective method to identify the causal direction between two variables. Check it out!
It is a slightly modified version of the method from the "Bivariate Causal Discovery via Conditional Independence" paper (https://openreview.net/forum?id=8X6cWIvY_2v). I'm working on an improved binning algorithm; stay tuned for the new release!
A recent work published in RESEARCH on dynamical causality detection using time series:
https://doi.org/10.34133/2022/9870149 Continuity Scaling: A Rigorous Framework for Detecting and Quantifying Causality Accurately
If it is always the present moment, i.e. the Now, how can there be before and after? I cannot square this circle.
I am searching for material on the interactions between the study of time series and the study of causality. I'm interested in both directions of this link: finding causal influences in time series data, but also the more philosophical view that the time dimension is a big part of a causal relationship (the cause happens before the effect). For example, one can imagine that progress in machine learning could offer new tools to the field of causality. Reading "The Book of Why", I found a couple of mentions of time series, which basically said that it is better to have controlled experiments than time series data, which often hide spurious correlations. I'd take that as a "pessimistic" view of this link; I'm curious whether someone else has written about this subject, especially the temporal aspect of cause and effect.
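On the practical side of that link, a standard entry point is Granger causality on stationary series. A minimal sketch with statsmodels on synthetic data (lag choice arbitrary); keep in mind it is predictive rather than interventional, so the spurious-correlation worry from The Book of Why still applies:

```python
# Granger-causality sketch on synthetic data: x "Granger-causes" y with a 1-step lag.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.1 * y[t - 1] + rng.normal()

# Column order matters: the test asks whether column 2 helps predict column 1.
data = np.column_stack([y, x])
res = grangercausalitytests(data, maxlag=3)   # prints a summary per lag, returns a dict
print("lag-1 F-test p-value:", res[1][0]["ssr_ftest"][1])   # small p => x helps predict y
```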
Is finding a Markov blanket/boundary a good way of creating a causal model? Basically, finding the set of random variables whose changes cause a dependent random variable to change?
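Worth noting that a Markov blanket contains the parents, children, and co-parents (spouses) of the target, not only its direct causes, so it is a useful variable-selection step rather than a causal model by itself. A rough grow-only sketch using partial-correlation tests (a simplified take on IAMB-style ideas, not any specific published algorithm):

```python
# Simplified grow-only Markov-blanket search using Gaussian partial-correlation tests.
# Illustration only; real algorithms (IAMB, HITON-MB, ...) add a shrink phase and
# proper multiple-testing control.
import numpy as np
from scipy import stats

def partial_corr_pvalue(data, i, j, cond):
    idx = [i, j] + list(cond)
    prec = np.linalg.pinv(np.cov(data[:, idx], rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    n, k = data.shape[0], len(cond)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - k - 3)   # Fisher z-transform
    return 2 * (1 - stats.norm.cdf(abs(z)))

def grow_blanket(data, target, alpha=0.01):
    blanket = []
    candidates = [v for v in range(data.shape[1]) if v != target]
    changed = True
    while changed:
        changed = False
        for v in list(candidates):
            if partial_corr_pvalue(data, target, v, blanket) < alpha:
                blanket.append(v)
                candidates.remove(v)
                changed = True
    return blanket

# Toy data: X0 -> Y <- X1, X2 independent noise. Expected blanket of Y: [0, 1].
rng = np.random.default_rng(0)
x0, x1, x2 = rng.normal(size=(3, 2000))
y = x0 + x1 + 0.5 * rng.normal(size=2000)
data = np.column_stack([x0, x1, x2, y])
print(grow_blanket(data, target=3))
```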
Hi! Just found this sub and thought you all might enjoy this short play I wrote in 2012 called “Effect and Cause”. It runs in reverse, but the audience didn’t know. It was fun to watch the ripple of realization move through the crowd as the play progressed. I hope you like it…!
https://www.ineffable-solutions.com/_files/ugd/6f08db_5ae4f049fda44a1b9da6b0815cc8ef39.pdf
Apologies if the question is unclear, I'm not too familiar with causal inference.
I've been using a few different methods to estimate causal effects for an outcome variable through Microsoft's DoWhy library for Python. Despite using different methods (backdoor propensity score matching, linear regression, etc.), the causal estimates are always very similar to a naïve estimate where I just take the difference in outcome means between the treated and untreated groups. I've used the DoWhy library to test my assumptions through a few methods of refuting the estimates (adding random confounders, removing a random data subset, etc.) and they all seem to work fine and verify my assumptions, but I'm still worried the estimates are wrong due to their similarity to the naïve estimates that don't take into account any possible confounding variables/selection biases.
Does this mean there's a problem with my causal estimates, or could the estimates still be fine? If there's a problem, is there any way to check whether it has something to do with my data (too high dimensionality), the DAG causal model I've created, or something else?
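One extra check worth adding: simulate data with a known effect and strong confounding, then confirm the same pipeline separates the adjusted estimate from the naive one. A minimal sketch with DoWhy (the graph and column names are made up); if the adjusted and naive numbers diverge on a simulation like this but coincide on the real data, the measured covariates are probably just not confounding the treatment much:

```python
# Sanity-check sketch: simulate data where the true effect (2.0) differs sharply from
# the naive difference in means, then confirm the DoWhy estimate recovers the truth.
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                                     # confounder
t = (rng.random(n) < 1 / (1 + np.exp(-2 * z))).astype(int) # treatment depends on z
y = 2.0 * t + 3.0 * z + rng.normal(size=n)                 # true effect of t is 2.0
df = pd.DataFrame({"t": t, "y": y, "z": z})

naive = df.loc[df.t == 1, "y"].mean() - df.loc[df.t == 0, "y"].mean()
print("naive difference in means:", round(naive, 2))       # inflated by the confounder

model = CausalModel(data=df, treatment="t", outcome="y", common_causes=["z"])
estimand = model.identify_effect(proceed_when_unidentifiable=True)
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print("adjusted estimate:", round(estimate.value, 2))      # should be close to 2.0
```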
Hello fellow Data Scientists,
I have mostly worked with observational data where the treatment assignment was not randomised, and I have used PSM and IPTW to balance the groups and then calculate the ATE. My problem: now I am working on a problem where the treatment assignment is randomised, meaning there shouldn't be a confounding effect, but the treatment and control groups have different sizes. There's a bucket imbalance. Should I just use standard statistical inference and run significance and statistical power tests?
Or should I first balance the sizes of the treatment and control groups using, say, covariate matching, and then run the significance tests?
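With randomisation, unequal bucket sizes mainly cost power rather than introduce bias, so a plain Welch test plus a power calculation that accounts for the size ratio is usually sufficient; matching is not needed for validity. A minimal sketch with scipy and statsmodels (all numbers are placeholders):

```python
# Welch t-test plus a power check that accounts for the unequal group sizes.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)
control = rng.normal(loc=0.00, scale=1.0, size=8000)   # larger bucket
treated = rng.normal(loc=0.05, scale=1.0, size=2000)   # smaller bucket

# Welch's t-test does not assume equal variances or equal sample sizes.
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print("p-value:", round(p_value, 4))

# Power of this design for a minimum effect of interest (here 0.05 standard deviations);
# ratio = n_treated / n_control captures the bucket imbalance.
power = TTestIndPower().power(effect_size=0.05,
                              nobs1=len(control),
                              ratio=len(treated) / len(control),
                              alpha=0.05)
print("power:", round(power, 2))
```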