Reaching 99.999999999997 Percent Safety: Computer Scientists Present Their Concept for a Wireless Bicycle Brake

Computer scientists at Saarland University have developed a wireless bicycle brake and demonstrated its efficiency on a so-called cruiser bike. They further confirmed the brake system’s reliability through mathematical calculations that are also used in control systems for aircraft or chemical factories.

To brake with the wireless brake, a cyclist just has to clench the rubber grip on the right handlebar. It seems as if a ghost hand is at play, but a combination of several electronic components enables the braking. Integrated into the rubber grip is a pressure sensor, which activates a sender once a specified pressure threshold is crossed. The sender sits in a blue plastic box the size of a cigarette packet attached to the handlebar. Its radio signals are sent to a receiver attached at the end of the bicycle’s fork. The receiver forwards the signal to an actuator, which transforms the radio signal into the mechanical force that engages the disk brake.
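
The chain of components can be pictured as a short pipeline. The Python sketch below is a minimal illustration of that pipeline under assumed values (the threshold, message format, and function names are invented for illustration), not the Saarland firmware.

from typing import Optional

PRESSURE_THRESHOLD = 30.0   # assumed grip pressure (arbitrary units) that should trigger braking

def sender(grip_pressure: float) -> Optional[dict]:
    """Handlebar unit: emits a radio message once the grip is squeezed hard enough."""
    if grip_pressure >= PRESSURE_THRESHOLD:
        return {"cmd": "BRAKE", "pressure": grip_pressure}
    return None

def receiver(message: Optional[dict]) -> bool:
    """Fork-mounted unit: forwards a valid brake command to the actuator."""
    return message is not None and message.get("cmd") == "BRAKE"

def actuator(engage: bool) -> str:
    """Stands in for the component that turns the command into mechanical force on the disk brake."""
    return "disk brake engaged" if engage else "disk brake released"

for pressure in (10.0, 45.0):          # a light touch versus a firm squeeze
    print(pressure, "->", actuator(receiver(sender(pressure))))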

To enhance reliability, additional senders are attached to the bicycle, each repeatedly transmitting the same signal. In this way, the scientists aim to ensure that the signal reaches the receiver in time even if one connection is delayed or fails. The Saarland researchers also found, however, that adding ever more senders does not keep increasing reliability.
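
A back-of-the-envelope calculation shows why redundancy helps. Assuming, purely for illustration, that each transmission independently fails with some probability p, the brake command is lost only if every sender’s copy fails; the Saarland team’s actual figures came from probabilistic model checking, not this simple formula.

p_single_failure = 1e-4   # assumed probability that one transmission is lost or arrives too late

for senders in (1, 2, 3):
    p_all_fail = p_single_failure ** senders      # the command is lost only if every copy fails
    print(f"{senders} sender(s): reliability = {(1 - p_all_fail) * 100:.10f} %")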

After initial talks with bicycle brake manufacturers, Holger Hermanns, the Saarland University professor leading the project, is looking for engineers to turn the wireless bicycle brake concept into a product.

Reference: https://www.sciencedaily.com/releases/2011/10/111013085105.htm

 


Computer Scientists Develop ‘Mathematical Jigsaw Puzzles’ To Encrypt Software

A team of researchers has designed a system to encrypt software so that it allows someone to use a program only as intended while preventing any deciphering of the code behind it. This is known as “software obfuscation,” and it is the first time it has been accomplished.

According to Amit Sahai, a computer science professor who specializes in cryptography at UCLA’s Henry Samueli School of Engineering and Applied Science, previously developed obfuscation techniques presented only a “speed bump,” forcing an attacker to spend some effort, perhaps a few days, trying to reverse-engineer the software. The new system, he said, puts up an “iron wall,” making it impossible for an adversary to reverse-engineer the software without solving mathematical problems that would take hundreds of years to work out on today’s computers, a game-changer in the field of cryptography.

The researchers said their mathematical obfuscation mechanism can be used to protect intellectual property by preventing the theft of new algorithms and by hiding the vulnerability a software patch is designed to repair when the patch is distributed.

The key to this successful obfuscation mechanism is a new type of “multilinear jigsaw puzzle.” Through this mechanism, attempts to find out why and how the software works will be thwarted.

The new technique for software obfuscation also paved the way for functional encryption. With functional encryption, instead of sending an encrypted message, an encrypted function is sent in its place. This offers a much more secure way to protect information, Sahai said. Previous work on functional encryption supported only a very limited set of functions; the new work can handle any computable function.

“Through functional encryption, you only get the specific answer, you don’t learn anything else,” Sahai said.
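
The contract Sahai describes can be pictured with a small, deliberately insecure Python mock of the usual functional-encryption interface (setup, keygen, encrypt, decrypt). It only illustrates what the decryptor does and does not learn; the names and the data handling are placeholders, not the UCLA construction.

import os
from dataclasses import dataclass
from typing import Callable

@dataclass
class FunctionKey:
    f: Callable[[int], int]   # a real scheme would embed an encoded circuit, not the raw function
    tag: bytes

@dataclass
class Ciphertext:
    hidden_x: int              # stand-in for an actual encryption of x
    tag: bytes

def setup() -> tuple:
    k = os.urandom(16)
    return k, k                # (public key, master secret key); identical only in this toy

def keygen(msk: bytes, f: Callable[[int], int]) -> FunctionKey:
    return FunctionKey(f=f, tag=msk)

def encrypt(mpk: bytes, x: int) -> Ciphertext:
    return Ciphertext(hidden_x=x, tag=mpk)

def decrypt(sk: FunctionKey, ct: Ciphertext) -> int:
    assert sk.tag == ct.tag    # key and ciphertext must come from the same setup
    return sk.f(ct.hidden_x)   # the holder of sk learns only f(x), never x itself

mpk, msk = setup()
sk_parity = keygen(msk, lambda x: x % 2)   # a key that reveals only the parity of the plaintext
ct = encrypt(mpk, 41)
print(decrypt(sk_parity, ct))              # -> 1; the value 41 itself is never disclosed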

Reference: https://www.sciencedaily.com/releases/2013/07/130729161946.htm

 

 

Brain-Like Computers Moving Closer to Cracking Codes

Scientists at the U.S. Army Research Laboratory have made a new discovery about brain-like computer architectures and an age-old number-theoretic problem: integer factorization.

The work moves away from traditional computing architectures and embraces devices that can operate within extreme size-, weight- and power-constrained environments. Such devices can process information and solve computationally hard problems more quickly.

Simply stated, the problem is this: take a composite integer N and express it as the product of its prime components. For example, 100 can be written as 2 × 2 × 5 × 5. What many don’t realize is that anyone doing this is performing a task that, if completed quickly enough for large numbers, could break much of the security of the modern internet.
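
For small numbers the task is easy to do by machine; a minimal trial-division sketch in Python (the slow conventional approach, not the Army Research Laboratory method) recovers the prime components directly. It is exactly this step that becomes infeasible when N has hundreds of digits.

def prime_factors(n: int) -> list:
    """Return the prime components of n by trial division (slow for large n)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:        # divide out each prime factor as often as it appears
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                    # whatever remains at the end is itself prime
        factors.append(n)
    return factors

print(prime_factors(100))        # -> [2, 2, 5, 5]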

The security of the RSA algorithm relies on the difficulty of factoring a large composite integer N, the public key, which the receiver distributes to anyone who wants to send an encrypted message. If N can be factored into its prime components, the private key can be recovered. Factoring large integers quickly, however, becomes prohibitively difficult, and it is this difficulty that underlies the security of RSA.
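
A toy example with tiny, textbook-sized numbers illustrates the claim: once N is factored, the private exponent follows immediately. The numbers below are chosen only for illustration (no real key sizes or padding) and have nothing to do with the study itself.

N, e = 3233, 17                       # toy public key: N = 53 * 61, far too small to be secure
p, q = next((d, N // d) for d in range(2, N) if N % d == 0)   # "break" N by trial division
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                   # private exponent: the modular inverse of e modulo phi(N)

ciphertext = pow(65, e, N)            # someone encrypts the message 65 with the public key
print(pow(ciphertext, d, N))          # the recovered private exponent decrypts it -> 65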

The scientists devised a way to factor large composite integers by harnessing the massive parallelism of novel computer architectures that mimic the functioning of the mammalian brain, and demonstrated how such brain-like computers can lend a speedup to the best currently known factoring algorithms.

As emerging devices shift to integrate massive parallelism and harness material physics to compute, the computational hardness underlying some security protocols may be challenged. The study opens the door to new research on emerging computer architectures, in terms of algorithm design and function representation, alongside low-power machine learning and artificial intelligence applications.

Reference: https://www.sciencedaily.com/releases/2018/03/180321174001.htm

 

Mathematical Solver for Analog Computers

Your computer performs most tasks well, but its style of mathematics, which relies on the binary system of “on” and “off” 1s and 0s, isn’t ideal for solving every problem.

That’s why researchers are interested in reviving analog computing at a time when digital computing is approaching the limits of its potential.

Zoltán Toroczkai, professor in the Department of Physics and concurrent professor in the Department of Computer Science and Engineering at the University of Notre Dame, and collaborators have been working toward developing a novel mathematical approach that can potentially find the best solution to NP-hard problems.

Analog computers were used to predict tides from the early to mid-20th century, guide weapons on battleships and launch NASA’s first rockets into space, among other uses. However, analog computers were cumbersome and prone to “noise” — disturbances in the signals — and were difficult to re-configure to solve different problems, so they fell out of favor.

Digital computers emerged after transistors and integrated circuits were reliably mass produced, and for many tasks they are accurate and sufficiently flexible.

One challenge for analog computing lies in the design of continuous algorithms. Unlike digital computing, which has a long history of algorithm development, analog computing lacks a comparable knowledge base, which makes such algorithms very difficult to design.
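
To make the idea of a continuous algorithm concrete, here is a small, hypothetical Python sketch, a generic Hopfield-style relaxation rather than the Notre Dame group’s method: a tiny instance of an NP-hard problem (MAX-CUT on a four-vertex cycle) is encoded as a dynamical system whose trajectory is integrated step by step, and rounding the settled state yields a partition of the graph.

import numpy as np

# Adjacency matrix of a 4-vertex cycle: vertices 0-1-2-3-0.
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
], dtype=float)

rng = np.random.default_rng(0)
u = rng.normal(scale=0.1, size=4)     # internal state of the "analog" variables
dt = 0.05                             # Euler step size

for _ in range(2000):
    x = np.tanh(u)                    # continuous spin in (-1, 1) for each vertex
    u += dt * (-A @ x)                # descend the interaction energy sum_ij A_ij x_i x_j

side = np.sign(np.tanh(u))            # round the settled state to a partition
cut = sum(A[i, j] for i in range(4) for j in range(i + 1, 4) if side[i] != side[j])
print("partition:", side, "cut edges:", cut)   # the 4-cycle's maximum cut is 4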

The next step is to design and build devices tailored to specific tasks rather than everyday computing needs. There are still engineering problems to solve, such as spurious capacitances and better noise control, but the researchers expect the approach to get there.

Reference: https://www.sciencedaily.com/releases/2018/12/181212160058.htm