NLP in 2020: Modern Applications

NLP has gone from rule-based systems to generative systems with almost human-level accuracy on multiple rubrics within 40 years. This is incredible considering how far we were from talking naturally to a computer system even just ten years ago; now I can tell Google Home to turn off my sitting-room lights.

In this Stanford lecture, Chris Manning introduces a computer science class to what NLP is, its complexity, and specific tooling such as word2vec, which enables learning systems to learn from natural language. Professor Manning is the Director of the Stanford Artificial Intelligence Laboratory and a leader in applying Deep Learning (DL) to NLP.

The goal of NLP is to allow computers to ‘understand’ natural language in order to perform tasks and support human users in making decisions. For a logic system, understanding and representing the meaning of language is a “difficult goal”. The goal is so compelling that all major technology firms have invested heavily in the field. The lecture focuses on these areas of the NLP challenge.

Some applications in which you might encounter NLP systems are spell checking, search, recommendations, speech recognition, dialogue agents, sentiment analysis and translation services. One key point Chris Manning makes is that human language (whether text, speech or movement) is unique in that it is produced to communicate something: some ‘meaning’ is embedded in the action. This is rarely the case with anything else that generates data. It is data with intent, and extracting and understanding that intent is part of the NLP challenge. Chris Manning also lists the reasons “why NLP is hard”, which I think we take for granted.
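
To make the idea of extracting intent from text concrete, here is a minimal sentiment-analysis sketch using scikit-learn. The sentences, labels and test phrase are all made-up toy data, and a real system would need far more of it; this is an illustrative sketch, not a production approach.

    # A minimal bag-of-words sentiment classifier (toy data, scikit-learn assumed).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    texts = ["I love this phone", "great sound quality",
             "terrible battery life", "I hate the screen"]
    labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

    vectorizer = CountVectorizer()        # turn sentences into word-count features
    X = vectorizer.fit_transform(texts)
    clf = LogisticRegression().fit(X, labels)

    # Likely predicts [1] (positive), though with toy data the output can vary.
    print(clf.predict(vectorizer.transform(["I love the sound"])))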

Language interpretation depends on ‘common sense’ and contextual knowledge; language is ambiguous (computers like direct, formal statements!); and language mixes situational, visual and linguistic knowledge drawn from various timelines. The learning systems we have now do not have a lifetime of learned weights and biases, so they can currently only be applied to narrow-AI use cases.

The Stanford lecture also dives into DL and how it differs from a human exploring and designing features or signals to feed into learning systems. The lecture discusses the first spark of DL in speech recognition, with work done by George Dahl, where the DL approach achieved a 33% increase in performance compared to traditional feature modelling. Professor Manning also talks about how NLP and DL have added capabilities across three segments: Levels (speech, words, syntax and semantics), Tools (part-of-speech tagging, named entities and parsing) and Applications (machine translation, sentiment analysis, dialogue agents and question answering). In short, NLP + DL have created a ‘few key tools’ which have wide applications.
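
As a rough illustration of the Tools segment, the sketch below uses spaCy, one possible library choice rather than anything from the lecture itself, to produce part-of-speech tags, a dependency parse and named entities. It assumes spaCy and its small English model en_core_web_sm are installed.

    # Part-of-speech, parsing and entities with spaCy (en_core_web_sm assumed installed).
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Chris Manning teaches NLP at Stanford University.")

    for token in doc:
        print(token.text, token.pos_, token.dep_)  # POS tag and dependency relation
    for ent in doc.ents:
        print(ent.text, ent.label_)                # named entities, e.g. PERSON, ORG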

Words as vectors — https://youtu.be/8rXD5-xhemo?t=2346

Towards the end of the lecture, we explore how words are represented as numbers in vector spaces and how this applies to NLP and DL. These word-meaning vectors can then be used to represent meaning in words, sentences and beyond.
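
For a concrete sense of words as vectors, here is a minimal word2vec sketch using gensim (4.x API assumed). The two-sentence corpus is a toy example, so the learned vectors are illustrative rather than meaningful; real embeddings are trained on corpora of millions of words.

    # Training tiny word vectors with gensim's word2vec (toy corpus).
    from gensim.models import Word2Vec

    sentences = [["the", "cat", "sat", "on", "the", "mat"],
                 ["the", "dog", "sat", "on", "the", "rug"]]

    model = Word2Vec(sentences, vector_size=50, window=2, min_count=1)

    vec = model.wv["cat"]                     # a 50-dimensional vector for "cat"
    print(model.wv.similarity("cat", "dog"))  # cosine similarity between word vectors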


