Mossbridge | Psi Performance on Four Online Tests

Forced-choice Psi Performance on Four Online Tests as a Function of Multiple Factors

Julia Mossbridge, Mark Boccuzzi, & Dean Radin

We tested psi performance in four online forced-choice tasks designed to assess precognition and micro-psychokinesis on a random number generator. We used a trait-analysis approach to examine the relationship between psi performance and various demographic, personality, and target factors. The trait-analysis approach is not new to psi research and has been used over the past four decades with varying results. Drawing from this work, we expected that psi performance would appear as a small effect and that gender, psi belief, and target richness or interestingness would correlate with performance. We also expected that effects would sometimes be in the direction opposite of conscious intention. Such effects are traditionally called psi missing; we call them “expectation-opposing.”

As computational power and the availability of participants have increased with the advent of online experiments, so has our capacity to examine what factors might influence performance on different tasks. The performance we describe in this presentation was obtained from two psi-testing platforms, one an iOS smartphone app and the other a website. The smartphone app contained three “games,” which were designed to measure micro-psychokinesis, conscious precognition, and unconscious precognition. The website presented a fourth task designed to measure conscious precognition performance in the form of a precognitive remote viewing task. We used parametric null hypothesis significance tests, including multiple linear regression and t-tests, to compare performance on these tasks against chance and also to determine how the factors we examined were related to task performance. These factors were self-reported age, gender, psi belief, psi confidence, Big-5 personality type, and target interestingness. Where clear effects were found, we pre-registered confirmatory analyses for a portion of the data set that had not previously been examined.
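As a rough sketch of the kind of analysis described above (not the authors' actual pipeline), the Python snippet below runs a one-sample t-test of per-participant hit rates against chance and a multiple linear regression on demographic and belief predictors. The chance level (25%, i.e., a four-choice task), the column names, and the simulated data are all hypothetical, chosen only for illustration.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical per-participant data; real data would come from the app/website logs.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "hit_rate": rng.normal(0.25, 0.05, n),      # proportion of hits per participant
    "age": rng.integers(18, 80, n),
    "gender": rng.choice(["female", "male"], n),
    "psi_belief": rng.integers(1, 8, n),        # e.g., a 1-7 belief rating
})

# One-sample t-test: is mean hit rate different from the assumed 25% chance level?
t, p = stats.ttest_1samp(df["hit_rate"], popmean=0.25)
print(f"t = {t:.2f}, p = {p:.4f}")

# Multiple linear regression: hit rate as a function of age, gender, and psi belief.
model = smf.ols("hit_rate ~ age + C(gender) + psi_belief", data=df).fit()
print(model.summary())
```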

Overall, our hypotheses were confirmed, and we also discovered additional effects. In this talk, we will present new analyses not discussed in a previous (2019) talk. Analyses of data from 5,908 individual logins and 1,001,427 trials obtained between 2018 and 2020 revealed a rich complexity of performance patterns, indicating that psi performance was influenced by virtually all of the factors we explored. Specifically, our key findings were: 1) significant expectation-opposing effects, including a confirmatory pre-registered replication of an expectation-opposing effect on a micro-pk task; 2) significant relationships between performance and psi belief; 3) significant relationships between gender and performance on three of the four tasks; 4) apparent strategy differences between men and women, with men likely using a micro-pk-focused strategy across multiple tasks while women used different strategies depending on the task; and 5) significant relationships of timing and target interestingness with precognitive remote viewing performance, including a confirmatory pre-registered replication of the interestingness effect.
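To make the “expectation-opposing” direction concrete, here is a minimal sketch, using purely hypothetical counts and an assumed four-choice task, of how a hit rate significantly below chance would be flagged with a one-sided binomial test. This illustrates the concept only; it is not the authors' analysis or their data.

```python
from scipy import stats

# Hypothetical aggregate counts: 24,200 hits in 100,000 four-choice trials (chance = 0.25).
hits, trials, chance = 24_200, 100_000, 0.25

# One-sided test for performance *below* chance, i.e., opposite to conscious intention.
result = stats.binomtest(hits, trials, p=chance, alternative="less")
print(f"hit rate = {hits / trials:.3f}, one-sided p = {result.pvalue:.4g}")
```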

We will discuss these results and their interpretations, then describe our recommendations for future attempts to better understand performance on online forced-choice psi tasks, a strategy we summarize with the acronym SEARCH: Small effects, Early and exploratory, Accrue data, Recognize diversity in approach, Characterize (don’t impose), and Hone in on big results.

---

Join the SSE to support the Society’s commitment to maintaining an open professional forum for researchers at the edge of conventional science: https://www.scientificexploration.org/join

The SSE provides a forum for original research into cutting edge and unconventional areas. Views and opinions belong only to the speakers, and are not necessarily endorsed by the SSE.

Published on May 10, 2022
