Deep Learning: An in-depth look at AI-powered Technology | Technology

Deep Learning is helping industries make great strides, but is it revolutionary or just a “useful tool” destined for extinction?

It’s a busy day in 2039 and you’re watching a movie while being transported by one of the countless autonomous vehicles prowling the world’s roads. The car brakes and accelerates when necessary, avoids crashing into things (other cars, cyclists, stray cats), obeys all traffic signals, and always stays within the lane markers.

Not long ago, such a scenario would have seemed ridiculous. It is now more and more firmly in the realm of the possible. In fact, autonomous vehicles may one day become so aware of their surroundings that accidents will be virtually non-existent. However, getting to that point requires overcoming a number of hurdles using a variety of complex processes, including deep learning. But how far can technology take us?

“Deep learning is solving a problem and it’s a useful tool for us,” says Xiang Ma, a machine learning expert and veteran research manager at HERE Technologies in Chicago, which works on developing sophisticated navigation systems for autonomous vehicles. “And we know it’s working. But it could just be a stopgap technology. We don’t know what’s next.”

Deep learning is a form of machine learning that is a subset of artificial intelligence.

What is Deep Learning?

A recently reinvigorated form of machine learning, itself a subset of artificial intelligence, deep learning employs powerful computers, massive data sets, “supervised” (trained) neural networks, and an algorithm called backpropagation (“backprop” for short) to recognize objects and translate speech in real time by mimicking the layers of neurons in the neocortex of the human brain.

Deep Learning: A quick explanation

Deep learning (sometimes known as deep structured learning) is a subset of machine learning in which machines use artificial neural networks to process information. Inspired by the webs of biological neurons in the human brain, deep learning helps computers quickly recognize and process images and speech. Computers then “learn” what these images or sounds represent and build a huge store of knowledge for future tasks. In essence, deep learning allows computers to do what humans do naturally: learn by immersion and example.
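
The “learn by immersion and example” loop can be sketched in miniature. Below is a toy one-hidden-layer network, written in plain Python purely for illustration (real deep learning systems stack far more layers and run on GPUs through frameworks), that learns the XOR function via the backpropagation training loop described in this article:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """A one-hidden-layer network trained with backpropagation."""

    def __init__(self, n_in=2, n_hidden=4):
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [random.uniform(-1, 1) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        return sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)

    def train_step(self, x, target, lr=0.5):
        y = self.forward(x)
        # Error at the output unit, pushed back through the sigmoid.
        dy = (y - target) * y * (1 - y)
        for j, hj in enumerate(self.h):
            # Propagate the error back one layer before updating weights.
            dh = dy * self.w2[j] * hj * (1 - hj)
            self.w2[j] -= lr * dy * hj
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * dh * xi
            self.b1[j] -= lr * dh
        self.b2 -= lr * dy
        return (y - target) ** 2

# XOR: the classic task a single linear unit cannot learn.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
net = TinyNet()
first = sum(net.train_step(x, t) for x, t in data)
for _ in range(3000):
    last = sum(net.train_step(x, t) for x, t in data)
print(first > last)  # True: repeated exposure to examples reduced the error
```

The point is the shape of the loop, not the scale: show examples, measure error, nudge every weight in the direction that shrinks it, repeat.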

Deep learning has been around since the 1950s, but its rise to stardom in the field of artificial intelligence is relatively recent. In 1986, pioneering computer scientist Geoffrey Hinton, now a Google researcher and long known as the “godfather of deep learning,” was one of several researchers who helped make neural networks great again, scientifically speaking, by showing that networks with more than a few layers could be trained using backpropagation to improve shape recognition and word prediction. By 2012, deep learning was being used in everything from consumer apps like Apple’s Siri to pharmaceutical research.

“The point about this approach is that it scales beautifully,” Hinton told the New York Times. “Basically, you just need to keep making it bigger and faster, and it will get better. There is no turning back.”

How Deep Learning works: Building the next-generation autonomous car

Ma’s team at HERE creates high-definition maps that greatly enhance a vehicle’s perceptual capabilities and build the navigation system for future travel. Deep learning is crucial to that process.

“Some people say we can live without HD maps, that we can just put a camera in the car,” Ma says after proudly leading a tour of his office space and showing off the company’s futuristic coffee machine (try the coffee with milk!). “But no matter how good your camera is, you will always have a case of failure. No matter how good your algorithm is, you are always missing something. So in the event your sensors are broken, the map is your last resort.”

Through its development of automated algorithms and a scalable data pipeline (in this case, tens of thousands of cloud-based computers running parallel algorithms, a system that can handle an ever-increasing amount of input data without crashes or slowdowns), he and his team build high-definition maps from data collected by sensors in the cars of HERE’s main owners: BMW, Mercedes and Audi.

Because it’s important to start with accurate training data, Ma explains, human tagging is a crucial first step in the process. Street-level imagery and lidar data (a radar-like detection system that uses laser light to collect precise 3D shapes of the world), combined with lane-marking information that is initially encoded manually, are fed to a deep learning engine in which the model is repeatedly (“iteratively,” as the jargon goes) processed, improved, and retrained. The model is then “deployed” (put to use, in more jargon) in a production pipeline to automatically detect lane markings down to the centimeter. Humans enter the equation again to verify that all measurements are correct.
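
The label–train–deploy–verify cycle Ma describes can be outlined as a loop. The function names and numbers below are invented stand-ins (HERE’s actual pipeline is proprietary); the stub “model” just averages its labels so the control flow stays visible:

```python
def human_label(frames):
    """Stage 1: manual lane-marking annotation (here, a stub)."""
    return [(f, f["lane_px"]) for f in frames]

def train(model, examples):
    """Stage 2: fit the detector (stub: remember the mean label)."""
    labels = [y for _, y in examples]
    model["lane_estimate"] = sum(labels) / len(labels)
    return model

def deploy_and_detect(model, frame):
    """Stage 3: automatic detection in the production pipeline."""
    return model["lane_estimate"]

def human_verify(prediction, truth, tolerance=0.5):
    """Stage 4: humans confirm measurements; failures trigger retraining."""
    return abs(prediction - truth) <= tolerance

model = {}
frames = [{"lane_px": 3.0}, {"lane_px": 3.2}, {"lane_px": 2.8}]
for iteration in range(3):  # "iteratively," as the jargon goes
    examples = human_label(frames)
    model = train(model, examples)
    checks = [human_verify(deploy_and_detect(model, f), f["lane_px"])
              for f in frames]
    if all(checks):
        break  # every measurement verified; the model ships
print(model["lane_estimate"])  # ≈ 3.0
```

In the real pipeline the “train” step is a deep network and the verification step is human review at centimeter precision, but the feedback structure is the same.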

Ideally, says Ma, many more vehicle manufacturers will agree to share their sensor data (but not personally identifiable information that could raise privacy issues) so autonomous vehicles can learn from each other and benefit from live updates. For example, let’s say there is a six-car pileup five miles ahead of your current location. The first vehicle to detect the crash would relay the information to a central source, HERE, which would in turn relay it to all vehicles approaching the crash site. It is still a work in progress, but Ma is confident in its effectiveness.
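
That relay pattern is essentially publish–subscribe with a position filter. The sketch below uses invented class names (this is not a real HERE API) to show the idea: the central source forwards a hazard report only to vehicles that have not yet passed the crash site:

```python
class CentralSource:
    """Receives hazard reports and relays them to approaching vehicles.
    Illustrative only; names and interfaces are invented for this example."""

    def __init__(self):
        self.vehicles = []

    def register(self, vehicle):
        self.vehicles.append(vehicle)

    def report_hazard(self, mile_marker):
        # Relay only to vehicles still approaching the crash site.
        for v in self.vehicles:
            if v.position < mile_marker:
                v.alerts.append(mile_marker)

class Vehicle:
    def __init__(self, position):
        self.position = position  # current mile marker
        self.alerts = []

hub = CentralSource()
approaching = Vehicle(position=10.0)  # five miles behind a crash at mile 15
already_past = Vehicle(position=20.0)
hub.register(approaching)
hub.register(already_past)
hub.report_hazard(mile_marker=15.0)
print(approaching.alerts, already_past.alerts)  # [15.0] []
```

A production system would add authentication, map-matched routes, and anonymization of the reporting vehicle, which is exactly the privacy concern Ma flags.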

The scope and impact of Deep Learning: Revolutionary or just a useful tool?

In a 2012 New Yorker article, New York University machine learning professor and researcher Gary Marcus expressed his reluctance to hail deep learning as some sort of revolution in artificial intelligence. While it was “important work, with immediate practical applications”, it nonetheless represented “only a small step towards creating truly intelligent machines”.

“Realistically, deep learning is only part of the larger challenge of building intelligent machines,” Marcus explained. “Such techniques lack ways to represent causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas such as ‘sister’ or ‘same as’. They have no obvious ways of making logical inferences, and they are still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are normally used.”

Now, six years later, deep learning is more sophisticated thanks in large part to increased computing power (graphics processing units, in particular) and algorithmic advances (particularly in the field of neural networks). It is also applied more widely: in medical diagnostics, voice search, automatic text generation, and even weather forecasting, to name a few areas.

“The power of this was really when people discovered that you could apply deep learning to a lot of problems,” says Guild AI founder Garrett Smith, who also runs machine learning organization Chicago ML. “It was like Thor’s golden hammer. Where it is applied, it is much more effective. In some cases, certain apps can turn the dial up several notches for superhuman performance.”

Google’s DeepMind, which it acquired in 2014, is a leader in that space, and its goal is nothing less than “solving intelligence” by merging machine learning and neuroscience. The UK-based company’s work on “deep reinforcement learning,” a combination of neural-network-based deep learning and trial-and-error-based reinforcement learning, led to the development of software called AlphaGo, which defeated the human world champion in the ancient Chinese game of Go in 2016, and its more advanced sibling, AlphaGo Zero.

Recently, DeepMind and the non-profit AI research organization OpenAI joined forces to publish a paper titled “Reward Learning from Human Preferences and Demonstrations in Atari.” Its thesis in a nutshell: human communication of goals works better than a hand-coded reward system when it comes to solving complex, real-world problems. The companies used a combination of human feedback and reinforcement learning to achieve superhuman performance in two of the games and better-than-baseline results in the other seven. Which means, essentially, that the machines learned to play video games by watching people play video games and, in two cases, vastly outperformed their human counterparts. And we can’t forget Facebook: the AI and social media giant came up with the technology behind DeepFace, which can identify faces in photos with human-level precision.
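
The core trick in preference-based reward learning can be shown at toy scale. The sketch below fits a linear reward function to pairwise human choices using a Bradley-Terry-style model; everything here (the single “score” feature, the data) is invented for illustration, and the actual paper trains deep networks over Atari frames, which this does not attempt to reproduce:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def reward(w, x):
    """A one-parameter linear reward over a single made-up feature."""
    return w * x

# Each pair (a, b) records that a human preferred trajectory a over b.
# The feature here is a trajectory's score; humans preferred higher ones.
preferences = [(8.0, 3.0), (5.0, 1.0), (9.0, 4.0)]

w = 0.0
lr = 0.1
for _ in range(200):
    for a, b in preferences:
        # Bradley-Terry model: P(human prefers a) = sigmoid(r(a) - r(b)).
        p = sigmoid(reward(w, a) - reward(w, b))
        # Gradient ascent on the log-likelihood of the observed choice.
        w += lr * (1.0 - p) * (a - b)

# The learned reward now ranks unseen trajectories the way humans would.
print(reward(w, 7.0) > reward(w, 2.0))  # True
```

Once a reward function like this is learned, an ordinary reinforcement learning agent can be trained against it instead of against a hand-written score.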

Science has also continued to benefit from (deep) machine learning.

“There is now a more organic connection between machine learning and science,” says University of Chicago computer science professor Risi Kondor. “It’s not just an annoying thing you have to do on the side because there’s too much data on your hard drive; it really is becoming an integral part of the scientific discovery process.”

Molecular dynamics, which involves simulating the trajectories of atoms and molecules to learn how a solid or biological system behaves under stress, or how drug molecules bind to their receptors, is an example. Before long, experts predict, molecular design will be fully automated, accelerating drug development.

The promise of neural networks

The biggest recent change in deep learning is the depth of neural networks, which have gone from a few layers to hundreds of them. More depth means a greater capacity to recognize patterns, which improves both object recognition and natural language processing. The former has more far-reaching ramifications than the latter.

“Translation [of languages] is a big deal, and there have been amazing applications,” says Smith. “You have more flexibility in terms of networks being able to make predictions in languages ​​that they have never seen before. But there is a limit to the general applicability of the translation. We’ve been doing it for a while, and it’s not a huge game changer. The vision thing is what’s really driving a lot of remarkable innovation. Putting a detector in a car so it can accurately judge its surroundings, that changes everything.”

Challenges in Deep Learning: How do we solve the data problem?

The “dazzling elephant in the room” when it comes to deep learning, says Smith, is the data, not so much quality as quantity, as deep/machine learning models are “remarkably resistant to lousy data.”

“You’re basically representing knowledge, the ability to do complex processing,” he says. “To do that, you need more neurons and more capacity. You need data that is not usually available.”

Which means pre-trained models (created by someone else and readily available online) and public datasets (ditto) won’t cut it. That’s where the billionaire giants of machine learning have a distinct advantage.

One-Shot Learning and NAS: A Powerful Combination

However, a couple of potentially transformative developments could serve to make deep learning faster and more equitable. A crucial one is the ability of models to learn from much less data, also known as “one-shot learning,” which is still in its infancy.

That’s huge, says Smith.

There is also “neural architecture search” (NAS), whereby an algorithm finds the best neural network architecture for a given task. Combining it with one-shot learning makes for a powerful pairing.
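
One common route to one-shot learning is metric-based: embed inputs as feature vectors, keep one labeled example per class, and classify new inputs by nearest neighbor. The sketch below uses hand-made feature vectors purely for illustration; real systems learn the embedding with a deep network, which is the hard part this toy skips:

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# ONE labeled example per class. The features (trunk length, ear size,
# body mass, all normalized) are invented for this example.
support = {
    "elephant": (1.0, 0.9, 1.0),
    "cat":      (0.0, 0.2, 0.1),
}

def one_shot_classify(features):
    """Assign the label of the nearest single stored example."""
    return min(support, key=lambda label: distance(support[label], features))

# A different elephant than the one training example:
print(one_shot_classify((0.9, 0.8, 0.95)))  # elephant
```

The quality of such a classifier rests entirely on how good the feature space is, which is why learning the embedding itself is where the research effort goes.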

GAN: The cat and mouse approach

In the quest to surpass deep learning, there has also been a lot of buzz around generative adversarial networks. Typically known as GANs, they consist of two neural networks (a “generator” network that creates new data and a “discriminator” network that decides whether that data is authentic) that are pitted against each other to effect something akin to artificial imagination. As one explanation puts it:

“You can think of a GAN as a combination of a counterfeiter and a policeman in a game of cat and mouse, where the counterfeiter learns to pass counterfeit bills and the policeman learns to spot them. Both are dynamic; that is, the police are also training (perhaps the central bank is flagging the bills that slipped through), and each side comes to learn the other’s methods in a constant escalation.”
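
The counterfeiter-versus-police dynamic can be reduced to a single parameter on each side. In this toy (not a full GAN, which trains two deep networks), the “counterfeiter” emits one number g and wants it to look like the real data value 5.0, while the “police” is a logistic scorer whose gradients are written out by hand:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

real = 5.0                 # the "genuine bill"
g, a, b = 0.0, 0.0, 0.0    # counterfeiter's output; police's two weights
lr = 0.05

for _ in range(2000):
    # Police step: push D(real) up and D(fake) down (ascent on the
    # discriminator's log-likelihood, gradients derived by hand).
    d_real = sigmoid(a * real + b)
    d_fake = sigmoid(a * g + b)
    a += lr * ((1 - d_real) * real - d_fake * g)
    b += lr * ((1 - d_real) - d_fake)

    # Counterfeiter step: move g so the police score it as genuine.
    d_fake = sigmoid(a * g + b)
    g += lr * (1 - d_fake) * a

# The fakes drift toward the real data as the escalation plays out.
print(g)
```

Each side’s improvement creates the gradient the other side learns from, which is the “constant escalation” in the quote; in a real GAN, g is replaced by an entire generated image.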

AutoML: A new form of Deep Learning?

Then there is the application of machine learning to machine learning. Called AutoML, it is based on a “learn-to-learn” instruction that prompts computers, through a learning process, to design innovative architectures (rules and methods) on their own.

“Many people are calling AutoML the new way to do deep learning, a system-wide change,” explains a recent essay. “Instead of designing complex deep networks, we will simply run a preset NAS algorithm. The idea of AutoML is to abstract away all the complex parts of deep learning. All you need is data. Let AutoML do the hard part of network design! Deep learning literally becomes a plug-in tool like any other: take some data and automatically create a decision function powered by a complex neural network.”
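
At its simplest, the NAS algorithm the essay mentions is a search loop over candidate architectures. The sketch below enumerates a tiny invented search space and scores each candidate with a mock function; in a real system that scoring step is “train this architecture and measure validation accuracy,” which is what makes NAS expensive:

```python
# Candidate architectures: an invented, deliberately tiny search space.
search_space = [
    {"layers": 2,  "width": 32},
    {"layers": 4,  "width": 64},
    {"layers": 8,  "width": 64},
    {"layers": 16, "width": 128},
]

def mock_validation_score(arch):
    """Stand-in for 'train the network, measure accuracy'. This mock
    task rewards depth but penalizes oversized models."""
    return arch["layers"] * 1.0 - arch["width"] * 0.05

# The search itself: evaluate every candidate, keep the best.
best = max(search_space, key=mock_validation_score)
print(best)  # {'layers': 16, 'width': 128}
```

Real NAS methods replace the exhaustive `max` with smarter strategies (reinforcement learning, evolution, or gradient-based relaxations) precisely because each evaluation is a full training run.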

Facebook’s director of artificial intelligence research, Yann LeCun, and New York University professor and machine learning expert Gary Marcus are at odds.

The future of Deep Learning: Evolution or Extinction?

But while Kondor, Smith, Ma and others appreciate the progress deep learning has made, they are also realistic (in Ma’s case, indeed fatalistic) about its limits.

“These tasks that deep learning has been really spectacularly powerful at are exactly the kind of tasks that computer scientists have been working on for a long time because they’re well defined and there’s a lot of commercial interest behind them,” says Kondor. “So it’s true that object recognition is completely different than it was 12 years ago, but it’s still just object recognition; they are not exactly high-level deep cognitive tasks. To what extent we are really talking about intelligence is a matter of speculation.”

In a Medium essay published last December titled “The Deepest Problem with Deep Learning,” Gary Marcus offered an updated version of his 2012 New Yorker examination of the topic. In it, he referenced an interview with the computer science professor at the University of Montreal, Yoshua Bengio, who suggested the “need to consider the difficult challenges of AI and not be satisfied with incremental advances in the short term. I’m not saying I want to forget about deep learning. On the contrary, I want to build on it. But we need to be able to extend it to do things like reason, learn causality, and explore the world to learn and acquire information.”

Marcus “agreed with pretty much every word,” and when he published the scientific paper “Deep Learning: A Critical Appraisal” in January 2018, his thoughts on why problems with deep learning are impeding the development of artificial general intelligence (AGI) caused a backlash online. In a series of tweets, Thomas G. Dietterich, distinguished professor emeritus at Oregon State University and former president of the Association for the Advancement of Artificial Intelligence, defended deep learning, noting that it works better than anything GOFAI (Good Old-Fashioned Artificial Intelligence) ever produced.

Still, even the “Godfather of Deep Learning” has changed his tune, telling Axios that he has become “deeply suspicious” of backpropagation, the algorithm that underlies deep learning.

“I don’t think that’s how the brain works,” Hinton said. “We [humans] clearly don’t need all the labeled data.”

His solution: “throw it all out and start over.”

However, are such drastic measures really necessary? It may be, as Facebook AI research director Yann LeCun told VentureBeat, that deep learning should ditch the popular but sometimes problematic Python coding language in favor of one that is simpler and more malleable. Alternatively, LeCun added, new hardware may be needed. In any case, one thing is clear: deep learning must evolve or it risks disappearing. Though achieving the former, experts say, does not preclude the latter.

The human brain is complex. Deep learning is not.

“Current deep learning is just a data-driven tool,” says HERE’s Ma. “But it’s definitely not self-learning yet.”

Not only that, but no one yet knows how many neurons a system would need to be self-learning. Furthermore, from a biological point of view, relatively little is known about the human brain, certainly not enough to create a system that even comes close to mimicking it. At this point, Ma says, even his three-year-old daughter has deep learning beat.

“If I show her a [single] image of an elephant and then she sees a different image of an elephant, she can immediately recognize that it is an elephant because she already knew the shape and can imagine it in her brain. But deep learning would fail on this problem, because it lacks the ability to learn from a few samples. We still rely on massive amounts of training data.”

For now, Ma says, deep learning is simply a useful method that can be implemented to solve various problems. As such, he thinks, its extinction is not imminent. Eventually, though, it is likely.

“As soon as we know what’s next, we’ll switch to that. Because this is not the definitive solution.”

Source: iArtificial, Direct News 99 
