UNIVERSITY PARK, Pa. — The world produces about 2.5 quintillion bytes of data every day. Storing and transferring this enormous and constantly growing volume of images, videos, tweets and other data is becoming a significant challenge, one that threatens to undermine the growth of the internet and thwart the introduction of new technologies, such as the internet of things.
Now, a team of researchers reports that an algorithm that uses a machine learning technique based on the human brain could ease that data clog by reducing the size of multimedia files, such as videos and images, and restoring them without losing much quality or information. Machine learning is a type of artificial intelligence, or AI.
In a study, the researchers developed an algorithm that uses a recurrent neural network to compress and restore data, according to C. Lee Giles, David Reese Professor of Information Sciences and Technology at Penn State and an Institute for CyberScience associate. The algorithm, which the team calls iterative refinement, focuses on the decoding, or restoring, step, and it produced restored images of better quality than the benchmarks selected for the study, including a compression system designed by Google that the researchers considered the best available at the time.
People compress data to store more photos on their smartphones, for example, or to share videos across the internet or over social media platforms such as YouTube and Twitter.
Giles said that the system’s success in compressing files is due to its use of a recurrent neural network decoder, rather than a feedforward network or a conventional (linear) decoder. A recurrent neural network uses stateful memory, which allows it to store pieces of data as it makes calculations. A regular, or feedforward, neural network cannot store data; information simply passes through it. With the added memory capacity, recurrent neural networks can perform better at tasks such as image recognition.
"A recurrent system has feedback, while a multilayered perceptron, or convolutional net, or other similar type of neural network, are usually feedforward, in other words, the data just goes through, it’s not stored as memory," Giles said.
David Miller, professor of electrical engineering and computer science, who worked with Giles, said that “the key advantage of recurrence in this image decoding context is that it exploits correlations over longer spatial regions of the image than a conventional image decoder.”
Another advantage of the algorithm, compared to competing systems, was the simplicity of the algorithm’s design, said the researchers, who reported their findings recently at the Data Compression Conference (DCC).
“We really just have the recurrent neural network at the end of the process, compared to Google’s, which does include recurrent neural networks, but they’re placed at a lot of different layers, which adds to the complexity,” said Giles.
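The paper spells out the actual architecture; as a rough, hypothetical sketch of the general idea Giles describes, with recurrence confined to the decoder, which repeatedly refines its reconstruction from a fixed compressed code, something like the following loop could apply. All names, sizes and the untrained random weights here are illustrative assumptions, not the team’s implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative, untrained weights; in a real system these would be learned.
CODE_DIM, HIDDEN, PIXELS = 32, 64, 256
W_code = 0.1 * rng.standard_normal((HIDDEN, CODE_DIM))
W_state = 0.1 * rng.standard_normal((HIDDEN, HIDDEN))
W_out = 0.1 * rng.standard_normal((PIXELS, HIDDEN))

def decode(code, n_passes=8):
    """Hypothetical iterative-refinement decoder: the recurrent state is
    updated from the same compressed code on every pass, and each pass
    adds a correction to the running reconstruction."""
    h = np.zeros(HIDDEN)                  # recurrence lives only here, in the decoder
    reconstruction = np.zeros(PIXELS)
    for _ in range(n_passes):
        h = np.tanh(W_code @ code + W_state @ h)
        reconstruction = reconstruction + W_out @ h
    return reconstruction

restored = decode(rng.standard_normal(CODE_DIM))
```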
One of the problems with compression is that when a compressed image or video is restored, the file might lose bits of information, which may make the image or video blurry or distorted. The researchers tested the algorithm on several images, and it was able to store and reconstruct them at higher quality than Google’s algorithm and the other benchmark systems.
Neural networks arrange their electronic “neurons” much as the brain arranges its own networks of neurons. However, Alexander G. Ororbia, an assistant professor at Rochester Institute of Technology whose research focuses on developing biologically motivated neural systems and learning algorithms, and who was the lead on this research, said electronic brains are far simpler.
"The important thing to remember is that these neural networks are loosely based on the brain," said Ororbia. "The neurons that make up an electronic neural network are much, much simpler. Real biological neurons are extremely complex. Some people say that the electronic neural network is almost a caricature of the brain's neural network."
Giles said that the idea to use recurrent neural networks for compression came from revisiting old neural network research on the compression problem.
"We noticed there was not much on using neural network for compression — and we wondered why," said Giles. "It's always good to revisit old work to see something that might be applicable today."
The researchers compared their algorithm’s ability to compress and restore an image against Google's system using three independent metrics that evaluate image quality: Peak Signal-to-Noise Ratio, Structural Similarity Index and Multi-Scale Structural Similarity Index.
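For reference, peak signal-to-noise ratio is computed directly from the mean squared error between the original and restored images; the snippet below is a generic illustration of that formula, not the study’s evaluation code, and the commented import assumes the scikit-image library is available for the structural similarity metrics.

```python
import numpy as np

def psnr(original, restored, max_val=255.0):
    """Peak signal-to-noise ratio in decibels; higher means less distortion."""
    diff = original.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")               # images are identical
    return 10.0 * np.log10(max_val ** 2 / mse)

# Structural similarity metrics are usually taken from an existing library, e.g.:
# from skimage.metrics import structural_similarity   # SSIM, if scikit-image is installed
```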
“The results from all of the independent benchmarks and test sets, and for all of the metrics, show that the proposed iterative refinement algorithm produced images with lower distortion and higher perceptual quality,” said Ankur Mali, a doctoral candidate at Penn State who worked extensively on the technical implementation of the system.
In the future, the researchers may also explore whether the system is easier to train than competing algorithms.
While all the compression neural networks require training — feeding data into the system to teach it how to perform — Giles thinks the team’s design may be easier to train.
“I would guess it’s much, much faster, in terms of training, too,” said Giles.
The team also included Jian Wu, assistant professor of computer science, Old Dominion University; and William Dreese, an undergraduate student in computer science, and Scott O’Connell, a recently graduated undergraduate student in mathematics, both at Penn State.