Scientific reproducibility does not equate to scientific truth, mathematical model finds

According to a mathematical model produced by a team from the University of Idaho, reproducible scientific results are not always true, and true scientific results are not always reproducible.

The researchers investigated the relationship between reproducibility and the discovery of scientific truth by building a mathematical model of a scientific community working toward finding a true result. In each simulation, the modeled scientists are asked to identify the shape of a specific polygon.

The modeled scientific community included multiple scientist types, each with a different research strategy, such as performing highly innovative experiments or simple replication experiments. Lead author Berna Devezer and her colleagues studied whether factors such as the makeup of the community, the complexity of the polygon and the rate of reproducibility influenced how quickly the community settled on the true polygon shape as the scientific consensus, and how persistently it stuck with that consensus.
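As a rough illustration only, and not the authors' published model, a toy version of such a community simulation might look like the sketch below. Every strategy, parameter and scoring rule in it is invented; the point is simply to show how agents with different strategies push a community consensus around while one records how quickly it reaches the truth and how persistently it stays there.

```python
# Toy illustration only -- not the authors' model; all parameters are invented.
import random

TRUE_SHAPE = 5                  # the "true polygon" (number of sides)
CANDIDATES = list(range(3, 9))  # candidate shapes the community can propose

def run_community(n_innovators, n_replicators, steps=500, noise=0.3, seed=0):
    rng = random.Random(seed)
    consensus = rng.choice(CANDIDATES)  # current accepted shape
    history = []
    p_innovate = n_innovators / (n_innovators + n_replicators)
    for _ in range(steps):
        # innovators propose new candidate shapes; replicators re-test the consensus
        proposal = rng.choice(CANDIDATES) if rng.random() < p_innovate else consensus
        # a noisy "experiment": evidence usually, but not always, favours the truth
        supports_proposal = (proposal == TRUE_SHAPE) ^ (rng.random() < noise)
        supports_consensus = (consensus == TRUE_SHAPE) ^ (rng.random() < noise)
        if supports_proposal and not supports_consensus:
            consensus = proposal        # the community switches its consensus
        history.append(consensus)
    # two of the quantities one could compare across community compositions
    time_to_truth = next((t for t, c in enumerate(history) if c == TRUE_SHAPE), None)
    persistence = history.count(TRUE_SHAPE) / steps
    return time_to_truth, persistence

if __name__ == "__main__":
    print("innovator-heavy: ", run_community(n_innovators=8, n_replicators=2))
    print("replicator-heavy:", run_community(n_innovators=2, n_replicators=8))
```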

Within the model, the rate of reproducibility did not always correlate with the probability of identifying the truth, the speed at which the community identified it, or whether the community stuck with the truth once it was identified. These findings indicate that reproducible results are not synonymous with finding the truth, Devezer said.

Compared with other research strategies in the model, highly innovative tactics led to quicker discovery of the truth.

“We found that, within the model, some research strategies that lead to reproducible results could actually slow down the scientific process, meaning reproducibility may not always be the best — or at least the only — indicator of good science,” said Erkan Buzbas, U of I assistant professor in the College of Science, Department of Statistical Science and a co-author on the paper. “Insisting on reproducibility as the only criterion might have undesirable consequences for scientific progress.”

Reference: https://www.sciencedaily.com/releases/2019/05/190515144008.htm

 


Statistical model could predict future disease outbreaks

A team from the University of Georgia has created a statistical method that may allow public health and infectious disease forecasters to better predict disease re-emergence.

In recent years, the re-emergence of measles, mumps, polio, whooping cough and other vaccine-preventable diseases has prompted a renewed focus on emergency preparedness.

The researchers focused on “critical slowing down,” or the loss of stability that occurs in a system as a tipping point is reached. This slowing down can result from pathogen evolution, changes in contact rates of infected individuals, and declines in vaccination. All these changes may affect the spread of a disease, but they often take place gradually and without much consequence until a tipping point is crossed.

“We saw a need to improve the ways of measuring how well-controlled a disease is, which can be difficult to do in a very complex system, especially when we observe a small fraction of the true number of cases that occur,” said Eamon O’Dea, a postdoctoral researcher in UGA ecologist John Drake’s laboratory who focuses on disease ecology.

The team created a visualization that looks like a series of bowls with balls rolling in them. In the model, vaccine coverage affects the shallowness of the bowl and the speed of the ball rolling in it.

“Very often, the conceptual side of science is not emphasized as much as it should be, and we were pleased to find the right visuals to help others understand the science,” the researchers said.

If a computer model of a particular disease were sufficiently detailed and accurate, it would be possible to predict the course of an outbreak using simulation, the researchers say.

“But if you don’t have a good model, as is often the case, then the statistics of critical slowing down might still give us early warning of an outbreak,” they add.
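As a rough sketch of the kind of early-warning statistic being described (an assumption on my part, not the researchers' published method), one can simulate a subcritical infection process with case importation and watch the lag-1 autocorrelation of case counts climb as transmission approaches the tipping point at R0 = 1, the ball slowing down in an ever-shallower bowl.

```python
# Assumed illustrative model, not the study's method: a subcritical stochastic
# SIS-type process with imported cases; autocorrelation of case counts rises
# as R0 approaches 1 -- the "critical slowing down" signal discussed above.
import numpy as np

def simulate_cases(r0, imports=2.0, recovery=0.2, n_steps=3000, burn_in=500, seed=1):
    rng = np.random.default_rng(seed)
    infected = 10
    series = []
    for t in range(n_steps):
        new_cases = rng.poisson(r0 * recovery * infected + imports)  # local + imported
        recoveries = rng.binomial(infected, recovery)
        infected = infected + new_cases - recoveries
        if t >= burn_in:
            series.append(infected)
    return np.array(series, dtype=float)

def lag1_autocorrelation(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

if __name__ == "__main__":
    for r0 in (0.5, 0.7, 0.9, 0.95):   # creeping toward the tipping point at R0 = 1
        ac = lag1_autocorrelation(simulate_cases(r0))
        print(f"R0 = {r0:.2f}  lag-1 autocorrelation = {ac:.3f}")
```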

Reference: https://www.sciencedaily.com/releases/2019/05/190521124653.htm

Mathematicians revive abandoned approach to Riemann Hypothesis

Over the last 50 years, there have been many proposed approaches to the Riemann Hypothesis, but none has led to conquering the most famous open problem in mathematics. A new paper in the Proceedings of the National Academy of Sciences (PNAS) builds on the work of Johan Jensen and George Pólya, two of the most important mathematicians of the 20th century. It reveals a method for calculating the Jensen-Pólya polynomials, a formulation of the Riemann Hypothesis, not one at a time but all at once.
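For context, here is the standard statement of the Jensen-Pólya criterion being referenced, written as a textbook-style sketch in conventional notation (which may differ in detail from the paper's own):

```latex
% Jensen polynomial of degree d and shift n attached to a real sequence \alpha(0), \alpha(1), \dots :
J_{\alpha}^{d,n}(X) \;=\; \sum_{j=0}^{d} \binom{d}{j}\, \alpha(n+j)\, X^{j},
\qquad d \ge 1,\ n \ge 0.
% Take \alpha = \gamma, where the \gamma(n) are the Taylor coefficients of the
% completed zeta function:
\left(-1 + 4z^{2}\right) \Lambda\!\left(\tfrac{1}{2} + z\right)
    \;=\; \sum_{n=0}^{\infty} \frac{\gamma(n)}{n!}\, z^{2n},
\qquad \Lambda(s) = \pi^{-s/2}\,\Gamma\!\left(\tfrac{s}{2}\right)\zeta(s).
% The Riemann Hypothesis is equivalent to every J_{\gamma}^{d,n} being hyperbolic,
% i.e. having only real roots.
```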

Although the paper falls short of proving the Riemann Hypothesis, its consequences include proofs of previously open assertions that were known to follow from the Riemann Hypothesis, as well as proofs of conjectures in other fields.

The idea for the paper was sparked two years ago by a “toy problem” that Ken Ono presented as a “gift” to entertain Don Zagier during the lead-up to a math conference celebrating Zagier’s 65th birthday. A toy problem is a scaled-down version of a bigger, more complicated problem that mathematicians are trying to solve.

The hypothesis is a vehicle to understand one of the greatest mysteries in number theory: the pattern underlying prime numbers. Although prime numbers are simple objects defined in elementary math (any whole number greater than 1 with no positive divisors other than 1 and itself), their distribution remains hidden.
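To make the link to primes concrete, one standard way of phrasing it (a classical fact, not specific to this paper) uses the prime-counting function:

```latex
% \pi(x) counts the primes up to x; the Prime Number Theorem gives the asymptotic
\pi(x) \;\sim\; \frac{x}{\log x} \qquad (x \to \infty),
% while the Riemann Hypothesis is equivalent (von Koch, 1901) to the sharp error bound
\pi(x) \;=\; \operatorname{Li}(x) + O\!\left(\sqrt{x}\,\log x\right),
\qquad \operatorname{Li}(x) = \int_{2}^{x} \frac{dt}{\log t}.
```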

For the PNAS paper, the authors devised a conceptual framework that combines the polynomials by degree. This method enabled them to confirm the criterion for each degree 100 percent of the time, eclipsing the handful of cases that were previously known.

Despite this progress, the results do not rule out the possibility that the Riemann Hypothesis is false, and the authors believe that a complete proof of the famous conjecture is still far off.

Reference: https://www.sciencedaily.com/releases/2019/05/190521162441.htm

 

Better together: human and robot co-workers

Many processes are currently being automated and digitised. Self-driving delivery vehicles, such as autonomous forklifts, are finding their way into many areas, and many companies report potential time and cost savings.

However, an interdisciplinary research team from the universities of Göttingen, Duisburg-Essen and Trier has observed that cooperation between humans and machines can work much better than purely human or purely robotic teams. The results were published in The International Journal of Advanced Manufacturing Technology.

The research team simulated a process from production logistics, such as the typical supply of materials for use in the car or engineering industries. A team of human drivers, a team of robots and a mixed team of humans and robots were assigned transport tasks using vehicles, and the time they needed was measured. The mixed team of humans and robots was able to beat the other teams: its coordination of processes was the most efficient and caused the fewest accidents. This was quite unexpected, as the highest levels of efficiency are often assumed to belong to systems that are completely automated.
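For a sense of how such a comparison can be set up, here is a toy scaffold, not the study's simulation; every duration model and team size below is invented for illustration. Transport tasks are handed to whichever team member becomes free first, and the total completion time is recorded for different team mixes.

```python
# Toy scaffold with invented parameters -- not the study's simulation.
import heapq
import random

def simulate(team, n_tasks=60, seed=0):
    """team: list of (kind, duration_fn) pairs; returns the makespan."""
    rng = random.Random(seed)
    # every agent is free at time 0; tasks go to whoever is free first
    free_at = [(0.0, i) for i in range(len(team))]
    heapq.heapify(free_at)
    makespan = 0.0
    for _ in range(n_tasks):
        t_free, i = heapq.heappop(free_at)
        kind, duration_fn = team[i]
        finish = t_free + duration_fn(rng)
        makespan = max(makespan, finish)
        heapq.heappush(free_at, (finish, i))
    return makespan

# Illustrative duration models: humans variable, robots steady. These numbers
# are assumptions for the sketch, not measurements from the paper.
human = ("human", lambda rng: rng.uniform(4.0, 8.0))
robot = ("robot", lambda rng: rng.gauss(6.0, 0.5))

if __name__ == "__main__":
    for label, team in [("humans only", [human] * 4),
                        ("robots only", [robot] * 4),
                        ("mixed",       [human, human, robot, robot])]:
        print(f"{label:12s} makespan = {simulate(team):.1f}")
```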

“This brings a crucial ray of hope when considering efficiency in all discussions involving automation and digitisation,” says the first author of the study, Professor Matthias Klumpp from the University of Göttingen.

The researchers from the various disciplines of business administration, computer science and sociology of work and industry highlighted the requirements for successful human-machine interaction. In many corporate and business situations, decisions will continue to be driven by people.

In conclusion, the researchers say that companies should pay more attention to their employees in the technical implementation of automation.

Reference: https://www.sciencedaily.com/releases/2019/05/190524113529.htm