AI Study Probably Didn’t Disprove Determinism. Maybe.

I use AI at work mainly for writing test questions for medical students, residents, etc. I also use it to write letters to insurance companies when medications for patients get denied. It is a great data scrubber if I need a short description of a medical condition. My AI of choice is typically Google’s Gemini.

I am not an AI doomer. AI is great. It also is pretty dumb about many things. Journal references are often (and STILL) incorrect. It can write some code, but not always complex code. I'm not yet worried about it taking over my job or turning into a Terminator.

I listened to a great podcast about AI this morning. The guest on the show discussed a peer-reviewed journal article showing where AI really stands in terms of predictive capability. This article is open access.

In summary, data from the Fragile Families and Child Wellbeing Study were used to predict life outcomes for children growing up in homes with significant financial and social stress. The dataset behind this challenge has produced more than 750 peer-reviewed journal articles, and it contains 12,952 variables about each family. 12,952! Whoa!

The authors managed to recruit 160 teams to build machine-learning models for predicting outcomes from these variables. Again: more than 750 peer-reviewed journal articles; 12,952 variables; 160 machine-learning models!

The results, per the study authors: “Once the Fragile Families Challenge was complete, we scored all 160 submissions using the holdout data. We discovered that even the best predictions were not very accurate… In other words, even though the Fragile Families data included thousands of variables collected to help scientists understand the lives of these families, participants were not able to make accurate predictions for the holdout cases. Further, the best submissions, which often used complex machine-learning methods and had access to thousands of predictor variables, were only somewhat better than the results from a simple benchmark model that used linear regression (continuous outcomes) or logistic regression (binary outcomes)…”

AI prediction in this study did not work well. The best models were perhaps a bit better at predicting outcomes than standard statistical methods, but they were still poor at predicting the overall life outcomes of the individuals studied.
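To make that comparison concrete, here is a minimal sketch, not the authors' code and using purely synthetic data, of the kind of evaluation the challenge describes: fit a simple linear-regression benchmark and a more flexible machine-learning model on the same training cases, then score both on held-out cases with R². The variable counts, the gradient-boosting model, and the noise level are all illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: synthetic data standing in for a noisy,
# hard-to-predict life outcome that depends weakly on a few of many variables.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_families, n_variables = 2000, 500            # hypothetical cohort size and variable count
X = rng.normal(size=(n_families, n_variables))
signal = X[:, :5] @ rng.normal(size=5)         # only a handful of variables carry real signal
outcome = signal + rng.normal(scale=3.0, size=n_families)  # heavy noise swamps that signal

X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, outcome, test_size=0.3, random_state=0
)

# Simple benchmark (linear regression) versus a more complex machine-learning model,
# both scored on the same held-out cases.
benchmark = LinearRegression().fit(X_train, y_train)
ml_model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

print("benchmark R^2 on holdout:", round(r2_score(y_holdout, benchmark.predict(X_holdout)), 3))
print("ML model  R^2 on holdout:", round(r2_score(y_holdout, ml_model.predict(X_holdout)), 3))
```

With this much noise, both models leave most of the variation unexplained, and the gap between the complex model and the simple benchmark stays small, which mirrors the pattern the challenge reports for real families.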

What does this mean? I think of two ideas to expand upon.

  1. AI currently is not the behemoth that we are worried about. It is not a “paperclip maximizer.” I do wonder whether all the AI talk these days is inflating an economic bubble. Of course, I am not an economist.
  2. However, I like to think that this study puts determinism in a bit of a spot. Determinism versus free will is a Sisyphean issue: we debate, debate, debate, and the debate never ends. I would argue that if massive data modeling is not significantly predictive of overall outcomes, as in the Salganik et al. study above, then perhaps we humans (and other organisms) can choose our outcomes within genetic, epigenetic, chemical, and physical limits.

Theologically speaking, if one accepts the ideas surrounding open & relational theology (ORT), then God wants all entities to have freedom of experience and of choice. I realize an electron has very limited choices; humans have a panoply of choices. This freedom leads to creativity, whether the formation of an electron cloud or the painting of the Mona Lisa.

Electron cloud images from https://wordpress.com/post/johnfpohlmd.blog/357

“Mona Lisa” by Leonardo da Vinci

This PNAS study sits in the scientific category of computer engineering and AI development, but we can think about its metaphysical implications as well.

Image created by OpenAI

Published by John Pohl

Professor of Pediatrics (MD), University of Utah; DThM, Northwind Theological Seminary. Professionally, I'm an academic pediatric gastroenterologist. I'm very interested in research evaluating the intersection of science and religion.
