The broadening reach of machine learning
Machine learning is the process of feeding data into a program so that it ‘learns’ how to perform a certain task, without engineers having to explicitly program an algorithm.
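To make the idea concrete (this sketch is not from the article, and the function name is our own), here is a minimal example of a program that 'learns' a rule from example data rather than having it hard-coded: given pairs that follow the hidden rule y = 3x, it estimates the slope by gradient descent.

```python
# Minimal illustration of 'learning from data': fit y = w * x
# by adjusting w to reduce error, instead of hard-coding w.

def learn_slope(data, steps=1000, lr=0.01):
    """Estimate w in y = w * x from (x, y) example pairs."""
    w = 0.0
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # nudge w in the direction that lowers the error
    return w

# Training examples generated by the hidden rule y = 3x
examples = [(1, 3), (2, 6), (3, 9), (4, 12)]
print(round(learn_slope(examples), 2))  # converges close to 3.0
```

Real systems use far richer models and vastly more data, but the principle is the same: the behaviour comes from the examples, not from an explicitly programmed rule.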
A few years ago, companies like Google and Microsoft open-sourced their machine learning platforms, allowing anyone to apply machine learning tools to their own projects. This year, we’ve really started to see the fruits of those accessible platforms, with machine learning applied to a wider range of tasks than ever before.
Machine learning is now being used, for instance, to detect the sounds of illegal logging in the rainforest, monitor the health of dairy cows, and detect cancer cells that are too small for pathologists to see.
However, machine learning isn’t without its share of controversies. It recently emerged that Amazon had to scrap an algorithm it was using to sort job applications, because it became biased towards male candidates. The algorithm was trained using employment data from the past 10 years at Amazon, and thus reflected male dominance in the tech industry. Machine learning is a powerful tool, but it’s only as good as the data on which it’s trained.
Machine learning predicts earthquake aftershocks
In August this year, scientists from Harvard and Google used machine learning to predict the locations of earthquake aftershocks. Seismic events involve a lot of variables, making it hard to pick out patterns in the data. This newly developed model was significantly more reliable than any existing prediction technique. It’s still too slow a process to be of much practical use, but the model indicates how beneficial machine learning could be in responding to natural disasters in the future.
Surveillance to save the planet
If you’re living in London, it’s likely you’ll be caught on CCTV cameras around 300 times today. Further afield, more than 1,700 satellites are monitoring cities from space.
Fearing invasions of our privacy, we’ve traditionally distrusted mass surveillance. Yet this network of cameras can also be used to help save vulnerable species and landscapes from eradication and destruction. More than 400 infrared camera traps are monitoring the movement of giant pandas in China, while astronomers and ecologists are combining their surveillance skills to search for orangutans, spider monkeys and river dolphins.
Earlier this year, Bill Gates announced his support for a project called EarthNow. It will involve launching a network of satellites with huge amounts of processing power, capable of streaming continuous real-time images of the Earth. EarthNow’s founder and CEO Russell Hannigan suggests it could be used to track whale migrations, catch illegal fishermen and detect wildfires in their earliest stages.
3D printing makes bionic limbs affordable
We’ve been able to create myoelectric prosthetic limbs, where the user controls the limb’s movements using their own muscles, for decades. For a prosthetic hand, the control signals usually come from the muscles in the arm. But it’s only recently that such technology became cost-effective enough to be a realistic option for most amputees. Largely, that’s thanks to the rise of 3D printing, which allows engineers to produce custom parts quickly and affordably.
UK company Open Bionics creates 3D-printed myoelectric prostheses, and all of its work is open source, allowing other researchers to build on it. Researchers at the Georgia Institute of Technology have integrated ultrasound control into one of Open Bionics’ arms, which has proved sensitive enough to allow a musician who lost his hand and forearm to play the piano again. Meanwhile, at Newcastle University, researchers have developed a bionic hand with a camera that can photograph objects and trigger movements, allowing the user to grip the objects more effectively.
Electronic skin brings touch feedback to prostheses
Researchers at Johns Hopkins University have created an electronic ‘skin’ that generates a sense of touch through a prosthetic’s fingertips. The ‘e-dermis’ is made of fabric and rubber, and conveys touch and pain information by electrically stimulating peripheral nerves in the user’s arm. The researchers are continuing to develop the technology to provide more meaningful sensory feedback, allowing users to regain more function.
This is an extract from issue 330 of BBC Focus magazine.
Subscribe and get the full article delivered to your door, or download the BBC Focus app to read it on your smartphone or tablet.
[This article was first published in May 2018]