
Academic Science and its problems

Let us begin by refreshing what the scientific method is. One makes an observation or poses a question, follows this with research into the topic, then forms a hypothesis, and then designs an experiment to test that hypothesis. The data is then analysed and the conclusions are reported. The value of those conclusions rests on whether the data, and the conclusions drawn from it, can be replicated.

The first issue here is replicability. Somewhere in the region of 70%[1] of studies are simply not replicated; that figure covers both attempted replications that fail to find the same result and studies for which replication is never attempted at all. This alone is pretty damning. Let us ask, then: what is needed for a result to be deemed significant? Statistically, the result must be sufficiently unlikely to have occurred by chance; this likelihood is expressed as a P value, and a result is conventionally called significant when the P value falls below a threshold such as 0.05. However, P values can be gamed by a practice called "P-hacking"[2]. A number of methods exist; the most common is measuring many dependent variables on a small sample, so that at least one of them is likely to produce a "significant" result purely by coincidence.
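
To make that concrete, here is a minimal sketch (my own illustration, not taken from any of the cited papers) of the multiple-comparisons form of P-hacking: two groups drawn from the same distribution, many dependent variables, a small sample, and a "study" that reports whichever variable happens to cross the usual 0.05 threshold. The sample size and variable count are illustrative assumptions.

```python
# Sketch: many dependent variables + small samples => spurious "significance".
# Assumes numpy and scipy are available; all numbers are illustrative.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_per_group = 10        # small sample size per group
n_dependent_vars = 20   # outcomes measured on the same subjects
n_studies = 1000        # simulate the whole "study" many times
alpha = 0.05

false_positive_studies = 0
for _ in range(n_studies):
    # Both groups come from the same distribution: there is no real effect.
    group_a = rng.normal(size=(n_dependent_vars, n_per_group))
    group_b = rng.normal(size=(n_dependent_vars, n_per_group))
    p_values = ttest_ind(group_a, group_b, axis=1).pvalue
    # A P-hacked write-up reports whichever outcome crossed alpha.
    if (p_values < alpha).any():
        false_positive_studies += 1

print(f"Studies finding at least one 'significant' result: "
      f"{false_positive_studies / n_studies:.0%}")
# Roughly 1 - (1 - 0.05)**20, i.e. about 64%, despite there being no effect.
```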

The reason P-hacking happens is pretty obvious. Scientific journals want to publish significant, novel results, not replications or insignificant results. As a consequence there is little funding for replication (cherry-picked replication, as was done by cigarette companies, is one of the few avenues for funding), and many journals simply do not accept replication papers at all.

This first problem is called the reproducibility problem, and it is a big enough problem that scientists have spent the last decade trying to tackle it. The point of reproducing results is that P-hacking can happen without malicious intent: measuring extra dependent variables is rarely malicious, and a small sample size is often the pragmatic consequence of not having the means for a larger one. If the results cannot be reproduced, the P-hacked findings are not in fact significant; by Occam's razor, coincidence is the far more likely explanation.
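
As a rough illustration of why replication filters this out (my own back-of-the-envelope arithmetic, not figures from the cited papers): a replication re-tests only the specific variable that was reported, so a coincidental finding has to clear the significance threshold twice.

```python
# Back-of-the-envelope: chance a coincidental finding survives replication.
alpha = 0.05
n_dependent_vars = 20

# Original P-hacked study: some variable crosses alpha by chance (~64%).
p_hacked_hit = 1 - (1 - alpha) ** n_dependent_vars

# An independent replication re-tests only that one reported variable,
# so a spurious result clears alpha again with probability alpha (~3% overall).
survives_replication = p_hacked_hit * alpha

print(f"Original study reports 'something significant': {p_hacked_hit:.0%}")
print(f"That finding also replicates by pure chance:    {survives_replication:.1%}")
```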

The second issue that is arising is one of paper quality. The paper on tortured phrases[3] is an example. Tortured phrases are mainly a symptom of padding papers and of being mildly deceptive, and they are indicative of a poor peer review process at scientific journals. The solution is self-evident: improve the peer review process. With such a quantity of papers, though, that is not easy, and the problem will only get worse as AI (the tortured phrase for AI is "counterfeit consciousness" :^)) gets better at producing sensible-looking papers. It will get worse without question now that GPT-3, produced by OpenAI (deceptively named, as it is not open at all), can produce coherent fictions. Another issue is papers simply citing non-existent scientific papers. I only see a solution where citations are checked against some technology like Google Scholar, but even that would be flawed, because link rot is dangerously fast in the 21st century (to the point that the internet could be called a demented brain of knowledge).
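
For the non-existent-citation problem specifically, a crude automated check is at least imaginable today. The sketch below is a hypothetical illustration of that idea, not a proposal from the article: it asks the public Crossref API whether each cited DOI actually resolves. The DOI list is illustrative, and this does nothing against link rot.

```python
# Hypothetical sketch: flag cited DOIs that no registry knows about.
# Uses the public Crossref REST API; the DOI list below is illustrative.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

cited_dois = [
    "10.1234/definitely-not-a-real-paper",  # bogus citation: should be flagged
    "10.1371/journal.pmed.0020124",         # assumed DOI of the paper in [2], for contrast
]

for doi in cited_dois:
    print(f"{doi}: {'found' if doi_exists(doi) else 'NOT FOUND'}")
```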

Presented here are two major issues in academic science today. They are the reason that a lot of papers are never replicated, or are simply trash. Those which aren't replicated are dubious until they are (and even replications can have issues). External factors muddy the waters further: people want to get good degrees, acquire more research funding and do decent science... but all of this comes at the cost of the integrity of the scientific world.

As a side note, I should mention the scientific cult: people who regard an unreplicated study as fact set in stone, and who will defer to such studies for their more extreme arguments. This affects even scientific citation, as papers that are less likely to be true are cited more[4]. In fact, people with an overbearing dependency on unreplicated scientific studies often have an agenda.

As a result of this, the papers I typically find more trustworthy are those from engineering companies, as they are quite often written with the intent of putting forward a new technology and convincing people it should be used more widely. An example would be the Valve paper on Signed Distance Functions[5]. Their claims are much more likely to be refuted and challenged by other engineering companies and by people in the scientific world, because these are typically well-regarded, well-known companies who can make their research widely visible (the darker side, naturally, is when that visibility is used to push an agenda, as cigarette companies have done).

=> [1] Nature survey of 1,500 scientists on reproducibility
=> [2] Why most published research findings are false
=> [3] Tortured phrases: A dubious writing style emerging in science. Evidence of critical issues affecting established journals
=> [4] Nonreplicable publications are cited more than replicable ones
=> [5] Improved Alpha-Tested Magnification for Vector Textures and Special Effects

Published on 2022/11/16
