Scientific reproducibility does not equate to scientific truth, mathematical model finds

According to a mathematical model produced by a team from the University of Idaho, reproducible scientific results are not always true, and true scientific results are not always reproducible.

Researchers investigated the relationship between reproducibility and the discovery of scientific truths by building a mathematical model that represents a scientific community working toward finding a scientific truth. In each simulation, the scientists are asked to identify the shape of a specific polygon.

The modeled scientific community included multiple scientist types, each with a different research strategy, such as performing highly innovative experiments or simple replication experiments. Devezer and her colleagues studied whether factors like the makeup of the community, the complexity of the polygon and the rate of reproducibility influenced how quickly the community settled on the true polygon shape as the scientific consensus, and how persistently that consensus held.

Within the model, the rate of reproducibility did not always correlate with the probability of identifying the truth, with how fast the community identified it, or with whether the community stuck with the truth once identified. These findings indicate that reproducible results are not synonymous with finding the truth, Devezer said.
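The authors' actual model is more elaborate, but the flavor of such a simulation can be sketched as follows (a toy version with invented parameters, not the Idaho team's code): agents either replicate the current consensus or innovate with new proposals, experiments are noisy, and we track when the community first lands on the truth and how long it stays there.

```python
import random

def simulate(n_steps=2000, innovate_prob=0.5, true_shape=7, n_shapes=10, seed=0):
    """Toy sketch (not the published model): a community holds a consensus
    'shape'; innovators propose random new shapes, replicators re-test the
    current consensus. A proposal is adopted when a noisy measurement
    favours it over the standing consensus."""
    rng = random.Random(seed)
    consensus = rng.randrange(n_shapes)
    first_hit = None       # step at which the truth first became consensus
    time_on_truth = 0      # steps during which the consensus was the truth

    def score(shape):
        # Noisy evidence: shapes closer to the truth score higher on average.
        return -abs(shape - true_shape) + rng.gauss(0, 1)

    for t in range(n_steps):
        if rng.random() < innovate_prob:
            proposal = rng.randrange(n_shapes)   # innovative experiment
        else:
            proposal = consensus                 # replication experiment
        if score(proposal) > score(consensus):
            consensus = proposal
        if consensus == true_shape:
            time_on_truth += 1
            if first_hit is None:
                first_hit = t
    return first_hit, time_on_truth / n_steps

hit, persistence = simulate()
```

Varying `innovate_prob` in a sketch like this is one way to see the paper's point: strategies that replicate reliably can still be slow to reach, or quick to abandon, the truth.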

Compared to other research strategies, highly innovative research tactics resulted in a quicker discovery of the truth.

“We found that, within the model, some research strategies that lead to reproducible results could actually slow down the scientific process, meaning reproducibility may not always be the best — or at least the only — indicator of good science,” said Erkan Buzbas, U of I assistant professor in the College of Science, Department of Statistical Science and a co-author on the paper. “Insisting on reproducibility as the only criterion might have undesirable consequences for scientific progress.”

Reference: https://www.sciencedaily.com/releases/2019/05/190515144008.htm

 

Statistical model could predict future disease outbreaks

A team from the University of Georgia has created a statistical method that may allow public health officials and infectious disease forecasters to better predict disease re-emergence.

In recent years, the reemergence of measles, mumps, polio, whooping cough and other vaccine-preventable diseases has sparked a refocus on emergency preparedness.

The researchers focused on “critical slowing down,” or the loss of stability that occurs in a system as a tipping point is reached. This slowing down can result from pathogen evolution, changes in contact rates of infected individuals, and declines in vaccination. All these changes may affect the spread of a disease, but they often take place gradually and without much consequence until a tipping point is crossed.

“We saw a need to improve the ways of measuring how well-controlled a disease is, which can be difficult to do in a very complex system, especially when we observe a small fraction of the true number of cases that occur,” said Eamon O’Dea, a postdoctoral researcher in Drake’s laboratory who focuses on disease ecology.

The team created a visualization that looks like a series of bowls with balls rolling in them. In the model, vaccine coverage affects the shallowness of the bowl and the speed of the ball rolling in it.
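Statistically, critical slowing down shows up as rising lag-1 autocorrelation (and variance) in a system's fluctuations as the "bowl" gets shallower. A minimal illustration (a generic AR(1) toy process, not the team's epidemiological model): the closer the persistence parameter is to 1, the slower perturbations decay and the higher the autocorrelation.

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a time series."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

def ar1(phi, n=20000, seed=1):
    """x_{t+1} = phi * x_t + noise. A phi near 1 means slow recovery from
    perturbations -- the statistical signature of a shallow 'bowl'."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, 1)
        out.append(x)
    return out

far = lag1_autocorr(ar1(0.2))    # deep bowl: fast recovery, low memory
near = lag1_autocorr(ar1(0.95))  # shallow bowl: near the tipping point
```

Monitoring a statistic like `near` rising over time is the kind of model-free early-warning signal the quote below alludes to.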

“Very often, the conceptual side of science is not emphasized as much as it should be, and we were pleased to find the right visuals to help others understand the science,” the researchers said.

If a computer model of a particular disease was sufficiently detailed and accurate, it would be possible to predict the course of an outbreak using simulation, researchers say.

“But if you don’t have a good model, as is often the case, then the statistics of critical slowing down might still give us early warning of an outbreak.”

Reference: https://www.sciencedaily.com/releases/2019/05/190521124653.htm

Mathematicians revive abandoned approach to Riemann Hypothesis

Over the last 50 years, there have been many proposed approaches to the Riemann Hypothesis, but none has led to conquering the most famous open problem in mathematics. A new paper in the Proceedings of the National Academy of Sciences (PNAS) builds on the work of Johan Jensen and George Pólya, two of the most important mathematicians of the 20th century. It reveals a method to calculate the Jensen-Pólya polynomials — a formulation of the Riemann Hypothesis — not one at a time, but all at once.
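The Jensen-Pólya criterion behind the paper can be stated compactly (a standard formulation paraphrased from the literature, not quoted from the article): for a real sequence α, the Jensen polynomial of degree d and shift n is

```latex
J_{\alpha}^{d,n}(X) \;=\; \sum_{j=0}^{d} \binom{d}{j}\,\alpha(n+j)\,X^{j}
```

The Riemann Hypothesis is equivalent to every such polynomial being hyperbolic, that is, having only real roots, when α is taken to be the Taylor coefficients of the completed zeta function. The new paper's advance is to verify hyperbolicity degree by degree, for all sufficiently large shifts n at once.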

Although the paper falls short of proving the Riemann Hypothesis, its consequences include previously open assertions which are known to follow from the Riemann Hypothesis, as well as some proofs of conjectures in other fields.

The idea for the paper was sparked two years ago by a “toy problem” that Ono presented as a “gift” to entertain Zagier during the lead-up to a math conference celebrating his 65th birthday. A toy problem is a scaled-down version of a bigger, more complicated problem that mathematicians are trying to solve.

The hypothesis is a vehicle to understand one of the greatest mysteries in number theory — the pattern underlying prime numbers. Although prime numbers are simple objects defined in elementary math (any number greater than 1 with no positive divisors other than 1 and itself), their distribution remains hidden.

For the PNAS paper, the authors devised a conceptual framework that combines the polynomials by degrees. This method enabled them to confirm the criterion for all but finitely many cases within each degree, far eclipsing the handful of cases that were previously known.

Despite their work, the results don’t rule out the possibility that the Riemann Hypothesis is false, and the authors believe that a complete proof of the famous conjecture is still far off.

Reference: https://www.sciencedaily.com/releases/2019/05/190521162441.htm

 

Better together: human and robot co-workers

Many processes are currently being automated and digitised. Self-driving delivery vehicles, such as autonomous forklifts, are finding their way into many areas, with many companies reporting potential time and cost savings.

However, an interdisciplinary research team from the universities of Göttingen, Duisburg-Essen and Trier has observed that cooperation between humans and machines can work much better than purely human or purely robot teams alone. The results were published in The International Journal of Advanced Manufacturing Technology.

The research team simulated a process from production logistics, such as the typical supply of materials in the car or engineering industries. A team of human drivers, a team of robots, and a mixed team of humans and robots were assigned transport tasks using vehicles, and the time they needed was measured. The mixed team of humans and robots was able to beat the other teams: its coordination of processes was the most efficient and caused the fewest accidents. This was quite unexpected, as the highest levels of efficiency are often assumed to belong to completely automated systems.

“This brings a crucial ray of hope when considering efficiency in all discussions involving automation and digitisation,” says the first author of the study, Professor Matthias Klumpp from the University of Göttingen.

The researchers from the various disciplines of business administration, computer science and sociology of work and industry highlighted the requirements for successful human-machine interaction. In many corporate and business situations, decisions will continue to be driven by people.

In conclusion, researchers say that companies should pay more attention to their employees in the technical implementation of automation.

Reference: https://www.sciencedaily.com/releases/2019/05/190524113529.htm

 

Exploring the Mathematical Universe

A team of mathematicians from 12 countries has begun charting the terrain of rich, new mathematical worlds. The mathematical universe is filled with both familiar and exotic items, now catalogued in the “L-functions and Modular Forms Database,” abbreviated LMFDB, a sophisticated web interface that allows both experts and amateurs to easily navigate its contents.

According to Benedict Gross, an emeritus professor of mathematics at Harvard University, “Number theory is a subject that is as old as written history itself. Throughout its development, numerical computations have proved critical to discoveries, including the prime number theorem, and more recently, the conjecture of Birch and Swinnerton-Dyer on elliptic curves. The LMFDB pulls together all of the amazing computations that have been done with these objects.”

Prime numbers have fascinated mathematicians throughout the ages. The distribution of primes is believed to be random, but proving this remains beyond the grasp of mathematicians to date. Under the Riemann hypothesis, the distribution of primes is intimately related to the Riemann zeta function, which is the simplest example of an L-function. The LMFDB contains more than twenty million L-functions, each of which has an analogous Riemann hypothesis that is believed to govern the distribution of a wide range of more exotic mathematical objects. Patterns found in the study of these L-functions also arise in complex quantum systems, and there is conjectured to be a direct connection to quantum physics.
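For reference, the zeta function mentioned above is the classical Dirichlet series, whose Euler product over the primes is what ties its zeros to the distribution of prime numbers:

```latex
\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}}
        \;=\; \prod_{p\ \mathrm{prime}} \left(1 - p^{-s}\right)^{-1},
\qquad \operatorname{Re}(s) > 1 .
```

The more exotic L-functions in the database generalise this series-and-product structure to other arithmetic objects.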

A recent contribution by Andrew Sutherland at MIT used 72,000 cores of Google’s Compute Engine to complete in one weekend a tabulation that would have taken more than a century on a single computer. The application of large-scale cloud computing to research in pure mathematics is just one of the ways in which the project is pushing forward the frontier of mathematics.

Reference: https://www.sciencedaily.com/releases/2016/05/160510084152.htm

 

Supercomputing For a Superproblem: A Computational Journey into Pure Mathematics

One of the most reputable and respected mathematicians in the world, known for solving one of the subject’s most challenging problems, has published his latest work as a University of Leicester research report.

This follows a visit that the famed mathematician Yuri Matiyasevich made to the Department of Mathematics, where he talked about his pioneering work. He visited the UK at the invitation of the Isaac Newton Institute for Mathematical Sciences.

In 1900, twenty-three unsolved mathematical problems, known as Hilbert’s Problems, were compiled as a definitive list by mathematician David Hilbert.

A century later, the seven most important unsolved mathematical problems to date, known as the ‘Millennium Problems’, were listed by the Clay Mathematics Institute. Solving one of these Millennium Problems carries a reward of US $1,000,000, and so far only one has been resolved: the famous Poincaré Conjecture, verified relatively recently by Grigori Perelman.

Yuri Matiyasevich found a negative solution to one of Hilbert’s problems. Now he is working on one of the most challenging of maths problems — and the only one that appears on both lists — the Riemann Hypothesis, concerning the zeros of the Riemann zeta function.

Professor Alexander Gorban, from the University of Leicester, said: “His visit was a great event for our mathematics and computer science departments.

“Matiyasevich has now published a paper through the University on the zeros of the Riemann Zeta Function (RZF). This is a mathematical function which has been studied for over a hundred years.

“There have been previous attempts to attack famous pure mathematical problems with massive computation. Unfortunately, the Riemann hypothesis does not reduce to a finite problem and, therefore, the computations can disprove it but cannot prove it. Computations here provide the tools for guessing and for disproving the guesses only.”

Reference: https://www.sciencedaily.com/releases/2012/11/121106125558.htm

 

Computers Unlock More Secrets of the Mysterious Indus Valley Script

Many artefacts left by an urban civilization that lived in what is now the border region between Pakistan and India have been discovered. Now a team of Indian and American researchers is using mathematics and computer science to try to piece together information about the still-undeciphered script.

The team used computers to extract patterns in ancient Indus symbols. The study shows distinct patterns in the symbols’ placement in sequences and creates a statistical model for the unknown language.

Despite dozens of attempts, nobody has yet interpreted the Indus script. The symbols are found on tiny seals, tablets and amulets, left by people inhabiting the Indus Valley from about 2600 to 1900 B.C. Each artefact is inscribed with a sequence that is typically five to six symbols long.

The new study shows that the order of symbols is meaningful; taking one symbol from a sequence found on an artefact and changing its position produces a new sequence that has a much lower probability of belonging to the hypothetical language.

Seals with sequences of Indus symbols have been found as far away as West Asia, specifically Mesopotamia, the site of modern-day Iraq. The statistical results showed that the West Asian sequences are ordered differently from sequences on artefacts found in the Indus valley. This supports earlier theories that the script may have been used by Indus traders in West Asia to represent information different from that in the Indus region.

They used a Markov model, a statistical method that estimates the likelihood of a future event based on past patterns.

One application described in the paper uses the statistical model to fill in missing symbols on damaged archaeological artefacts. Such filled-in texts can increase the pool of data available for deciphering the writings of ancient civilizations.

Reference: https://www.sciencedaily.com/releases/2009/08/090803185836.htm

 

Computer Scientist Reveals the Math and Science behind Blockbuster Movies

Computer-generated special effects breathe life into fantasies such as Pirates of the Caribbean. The amount of math and science behind such blockbusters would surprise even an adept scientist.

Computer graphics (CG) experts used to face a hard trade-off: they could run inferior algorithms on many processors, or run the best algorithm on only one processor, because many algorithms do not scale well to larger numbers of processors. But about a year and a half ago, Stanford computer scientist Ron Fedkiw, who has consulted for Industrial Light & Magic (ILM) for six years, figured out how to run a star algorithm on many processors, resulting in special effects unprecedented in their realism.

He designs new algorithms for diverse applications such as computational fluid dynamics and solid mechanics, computer graphics, computer vision and computational biomechanics. The algorithms may rotate objects, simulate textures, generate reflections or mimic collisions. Or they may mathematically stitch together slices of a falling water drop, rising smoke wisp or flickering flame to weave realism into CG images.

Fedkiw received screen credits for his work on Poseidon, on Terminator 3: Rise of the Machines for the liquid terminator and the nuclear explosions, and on Star Wars: Episode III—Revenge of the Sith for explosions in space battle scenes.

Most of Fedkiw’s students double-major in math and computer science. “Graphics itself is a bit less important, and many of them don’t take their first graphics class until their junior or senior year of college,” he says.

Fedkiw’s favorite movie employing CG is Revenge of the Sith. “When I watched the first [Star Wars film] at 9 years old, I never dreamed that I’d eventually be helping to make the last one,” he says.

Reference: https://news.stanford.edu/news/2007/april4/fed-040407.html

 

Reaching 99.999999999997 Percent Safety: Computer Scientists Present Their Concept for a Wireless Bicycle Brake

Computer scientists at Saarland University have developed a wireless bicycle brake and demonstrated its efficiency on a so-called cruiser bike. They further confirmed the brake system’s reliability through mathematical calculations that are also used in control systems for aircraft or chemical factories.

To brake with the wireless brake, a cyclist just has to clench the rubber grip on the right handle. It seems as if a ghost hand is at play, but a combination of several electronic components enables the braking. Integrated in the rubber grip is a pressure sensor, which activates a sender once a specified pressure threshold is crossed. The sender is integrated in a blue plastic box the size of a cigarette packet, attached to the handlebar. Its radio signals are sent to a receiver attached at the end of the bicycle’s fork. The receiver forwards the signal to an actuator, which transforms the radio signal into the mechanical power that activates the disk brake.

To enhance reliability, additional senders are attached to the bicycle, repeatedly sending the same signal. In this way, the scientists hope to ensure that the signal reaches the receiver in time, even if a connection is delayed or fails. However, the computer scientists at Saarland University found that increasing the number of senders does not necessarily result in increased reliability.
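To see why such extreme reliability figures are even conceivable, consider a deliberately naive model (an illustrative assumption, not the Saarland analysis): if each sender's message were lost independently, the chance that all redundant copies fail would shrink geometrically with the number of senders.

```python
def naive_failure_prob(p_single_fail: float, n_senders: int) -> float:
    """Naive redundancy model (an assumption for illustration): senders fail
    independently, and braking fails only if every sender's signal is lost."""
    return p_single_fail ** n_senders

# e.g. a 1-in-100 chance of losing any one signal:
one = naive_failure_prob(0.01, 1)
three = naive_failure_prob(0.01, 3)
```

Under independence, three senders would cut the failure odds to roughly one in a million. Real radio channels are correlated (shared interference, shared delays), which is why redundancy does not automatically translate into reliability and why the researchers resorted to the heavier formal analysis used for aircraft and chemical plants.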

After initial talks with bicycle brake manufacturers, Hermanns is now looking for engineers to realize the concept of the wireless bicycle brake.

Reference: https://www.sciencedaily.com/releases/2011/10/111013085105.htm

 

Computer Scientists Develop ‘Mathematical Jigsaw Puzzles’ To Encrypt Software

A team of researchers has designed a system to encrypt software so that it allows someone to use a program as intended while preventing any deciphering of the code behind it. The technique is known as “software obfuscation,” and this is the first time it has been accomplished.

Sahai, a computer science professor who specializes in cryptography at UCLA’s Henry Samueli School of Engineering and Applied Science, said previously developed techniques for obfuscation presented only a “speed bump,” forcing an attacker to spend some effort, perhaps a few days, trying to reverse-engineer the software. The new system, he said, puts up an “iron wall,” making it impossible for an adversary to reverse-engineer the software without solving mathematical problems that would take hundreds of years to work out on today’s computers — a game-changer in the field of cryptography.

The researchers said their mathematical obfuscation mechanism can be used to protect intellectual property by preventing the theft of new algorithms and by hiding the vulnerability a software patch is designed to repair when the patch is distributed.

The key to this successful obfuscation mechanism is a new type of “multilinear jigsaw puzzle.” Through this mechanism, attempts to find out why and how the software works will be thwarted.

The new technique for software obfuscation led to the emergence of functional encryption. With functional encryption, instead of sending an encrypted message, an encrypted function is sent in its place. This offers a much more secure way to protect information, Sahai said. Previous work on functional encryption was limited to supporting very few functions; the new work can handle any computable function.

“Through functional encryption, you only get the specific answer, you don’t learn anything else,” Sahai said.

Reference: https://www.sciencedaily.com/releases/2013/07/130729161946.htm