risingthumb.xyz Bringing the pain to an already pained web

Academic Science and its problems

Let us begin by refreshing what the scientific method is. One makes an observation or poses a question, researches the topic, forms a hypothesis, and then designs an experiment to test that hypothesis. The data is analysed and the conclusions are reported. The value of those conclusions rests on whether the data, and the conclusions drawn from it, can be replicated.

The first issue here is replicability. Somewhere in the region of 70%[1] of all studies are simply not replicated. This figure covers both attempted replications that fail to find the same result and studies where replication is never attempted at all. This alone is pretty damning. Let us ask, then: what is needed for a result to be deemed significant? Statistically, the result must be unlikely to have occurred by chance. This likelihood is expressed as a P value, and a result is conventionally deemed significant when the P value falls below some threshold, usually 0.05. However, P values can be gamed by a set of methods collectively called "P-hacking"[2]. The most common is measuring many dependent variables with a small sample size: with enough variables, one of them is likely to produce a "significant" result by pure coincidence.
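The multiple-comparisons form of P-hacking can be sketched with a short simulation. This is a minimal, illustrative sketch: the counts are arbitrary, and it relies on the fact that under the null hypothesis (no real effect) a P value is uniformly distributed on [0, 1].

```python
import random

random.seed(0)

ALPHA = 0.05
N_MEASURES = 20      # dependent variables measured per "study"
N_STUDIES = 10_000   # simulated studies with NO real effect anywhere

# Under the null hypothesis a p-value is uniform on [0, 1], so each
# individual test has a 5% chance of looking "significant" by chance.
# A p-hacked study reports whichever measure happened to dip lowest.
false_positives = 0
for _ in range(N_STUDIES):
    p_values = [random.random() for _ in range(N_MEASURES)]
    if min(p_values) < ALPHA:
        false_positives += 1

print(f"Studies with at least one spurious hit: "
      f"{false_positives / N_STUDIES:.0%}")
```

With 20 measures the chance of at least one coincidental hit is 1 - 0.95^20, roughly 64%: most of these no-effect studies can still report "a significant result".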

The reason P-hacking is done is pretty obvious. Scientific journals want to publish significant, novel results, not replications or insignificant findings. As a result there is little funding for replication (cherry-picked replication, as practised by cigarette companies, is one avenue for funding), and many journals simply do not accept replication papers at all.

This first problem is called the reproducibility problem, and it is such a big problem that over the last decade scientists have been actively trying to tackle it. Reproduction matters because P-hacking can happen without malicious intent: increasing the number of dependent variables is rarely malicious, and a small sample size is often a pragmatic consequence of not having the means for a larger one. If the results cannot be reproduced, the P-hacked results are not in fact significant; by Occam's razor, coincidence is the more likely explanation.
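Why replication filters out these coincidences can also be sketched in a few lines. The setup below is illustrative, not a model of any real study: a "study" with no true effect publishes its best of 20 measures, and a replication then re-tests only that one pre-registered outcome on fresh data.

```python
import random

random.seed(1)

ALPHA = 0.05
N_MEASURES = 20
N_STUDIES = 10_000

# All simulated studies have no real effect. One gets "published" if any
# of its 20 p-values dips below ALPHA; the replication draws fresh data,
# so under the null its single p-value is again uniform on [0, 1].
published = 0
replicated = 0
for _ in range(N_STUDIES):
    if min(random.random() for _ in range(N_MEASURES)) < ALPHA:
        published += 1
        if random.random() < ALPHA:
            replicated += 1

print(f"Published by chance: {published}, "
      f"survived replication: {replicated}")
```

Only around 5% of the spurious "findings" survive an honest single-outcome replication, which is exactly why an unreplicated significant result carries so little weight.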

The second issue arising is one of paper quality. The paper on tortured phrases[3] is an example. This is mainly a problem of padded, mildly deceptive papers, and it is indicative of a poor peer review process at scientific journals. The solution is self-evident: improve the peer review process. But with such a quantity of papers, that is not easy. The problem will only get worse as AI (the tortured phrase for AI is "counterfeit consciousness" :^)) gets better at producing sensible-looking papers. It will worsen without question given GPT-3, produced by OpenAI (deceptively named, as it is not open at all), which can already produce coherent fictions. Another issue is papers citing non-existent sources. I only see a solution tied to some technology like Google Scholar, but even that would be flawed, since link rot is dangerously fast in the 21st century (to the point that the internet could be called a demented brain of knowledge).

Presented here are two major issues in academic science today: a lot of papers are never replicated, and a lot are simply trash. Those which are not replicated remain dubious until they are (and even replications can have issues). External factors muddy the waters further: people want good degrees, more research funding, and to do decent science... but all of this comes at the cost of the integrity of the scientific world.

As a side note, I should mention the scientific cult: people who regard an unreplicated study as fact set in stone, and defer to it for their more extreme arguments. This affects even scientific citation, as papers less likely to be true are cited more[4]. In fact, people with an overbearing dependence on unreplicated scientific studies often have an agenda.

As a result of all this, the papers I typically find more trustworthy are those by engineering companies, since they are quite often written to set forward a new technology and convince people it should be used more widely. An example is the Valve paper on signed distance functions[5]. Their claims are much more likely to be refuted and challenged, both by rival engineering companies and by people in the scientific world, because these are typically well-known, well-regarded companies whose research is highly visible. (The darker side, naturally, is when they use that visibility to push an agenda, as cigarette companies have done.)

=> [1] Nature survey of 1,500 scientists on reproducibility
=> [2] Why most published research findings are false
=> [3] Tortured phrases: A dubious writing style emerging in science. Evidence of critical issues affecting established journals
=> [4] Nonreplicable publications are cited more than replicable ones
=> [5] Improved Alpha-Tested Magnification for Vector Textures and Special Effects
