Do Intelligent Robots Need Emotion?

What's your opinion?

What Happened To Google Allo?


Google shut down Allo (The Verge, 5 December 2018).


Allo was one of Google’s numerous attempts at creating an instant messaging app able to compete with the giants on the market – Apple’s iMessage and Facebook’s Messenger and WhatsApp.

Allo used phone numbers to identify users and didn’t require an email or social media account. It introduced several additions to the world of messaging, such as Selfie Stickers, Smart Reply, and Google Assistant.


Google has been trying to find a foothold in the messaging market for years. Starting with Google Talk, going through Hangouts and its ever-growing text, voice, and video features, and finishing with newer apps like Allo, Google Duo, and Google Messenger – Google seems to have tried it all.


When Allo came out in September 2016, things looked promising. The app used phone numbers as identifiers, which was good for users who needed a texting app that wasn’t connected to their social media or email. 


While Allo offered users many entertaining tools to enhance their messaging experience, it just wasn’t doing all that well in terms of numbers. Downloads peaked in the first 12 weeks after launch, at around 10 million, and by the time Google decided to discontinue the app it had fewer than 50 million downloads in total.


That might seem like a lot, but it simply wasn’t enough compared to the more than a billion people using Facebook’s Messenger monthly or the 2 billion using WhatsApp.


One recurring criticism of the service was that when Allo launched in 2016, it was available only on one device at a time, since it was tied to the user’s phone number. That didn’t help attract more users, and even though in 2017 Google added the option to use the app on the web alongside one mobile device, it might have been a bit too late.


Finally, the lack of SMS support was a big hindrance. People could already use Hangouts to chat on the web and via SMS at the same time, so it was a legitimate question why they would switch to Allo.

With this in mind, it’s not much of a surprise that Google decided to redirect time and resources elsewhere. In April 2018, Anil Sabharwal, the new head of Google’s communications group, announced that the tech giant was “pausing” development of Allo and would focus on Android Messages and Chat, built on the new RCS standard.


In December 2018, Google announced that Allo would be officially discontinued in March 2019, and users were given the option to save their data beforehand.


Source: https://www.failory.com/google/allo



A Study on Information Retrieval Methods in Text Mining


ABSTRACT:

Information in the legal domain is often stored as text in relatively unstructured forms. For example, statutes, judgments, and commentaries are typically stored as free-text documents. Discovering knowledge by the automatic analysis of free text is a field of research that is evolving from information retrieval and is often called text mining.

Dozier states that text mining is a new field and that there is still debate about what its definition should be. Jockson observes that text mining involves discovering something interesting about the relationship between the text and the world. Hearst proposes that text mining involves discovering relationships between the content of multiple texts and linking this information together to create new information.

Text information retrieval and data mining have thus become increasingly important. In this paper, various information retrieval techniques based on text mining are presented.


KEYWORDS:

Information Retrieval, Information Extraction and Indexing Techniques



How to prepare for writing a research paper on any topic in computer science?

 



Start reading the literature.


http://scholar.google.com is your friend. I’ve also found Microsoft Academic useful. Type in the topic area, whatever it is.


Begin looking at the links that are returned. Read the abstracts. Download the papers that interest you - when I download them I add the date of publication and the title of the paper as the name (because often the file names give you no contextual clue as to the contents). It is also good practice to capture the bibliographic data (I grab the BibTeX since I write my CS papers in LaTeX) because you will need it when you start writing.
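As a rough sketch of that habit (the file name, metadata, and BibTeX entry below are invented for illustration, not taken from real papers), the renaming and bibliography capture might look something like this in Python:

```python
# Sketch: rename a downloaded paper to "<date> <title>.pdf" and keep its BibTeX.
# All file names, metadata, and the BibTeX entry are made-up examples.
from pathlib import Path

downloads = {
    "1904.01802.pdf": {  # the original, uninformative file name
        "date": "2019-04",
        "title": "A Survey of Example Methods",
        "bibtex": (
            "@article{doe2019survey,\n"
            "  author  = {Doe, Jane},\n"
            "  title   = {A Survey of Example Methods},\n"
            "  journal = {Example Journal},\n"
            "  year    = {2019}\n"
            "}\n"
        ),
    },
}

paper_dir = Path("papers")
paper_dir.mkdir(exist_ok=True)

with open("references.bib", "a", encoding="utf-8") as bib:
    for original_name, meta in downloads.items():
        source = Path(original_name)
        target = paper_dir / f"{meta['date']} {meta['title']}.pdf"
        if source.exists():
            # The new name carries the context the original file name lacked.
            source.rename(target)
        # Keep the BibTeX handy for when the writing starts.
        bib.write(meta["bibtex"] + "\n")
```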


Start reading the interesting papers. Usually by the time you are done reading the introduction you will know if you want to read further. If you really liked the paper, then use the search engines to find papers that reference the paper you like.


If you are looking for the latest research, restrict your search to the past four years. I usually work backwards from the more recent work to older work. Sometimes I find survey articles, which can be great. Usually I will also find a few papers that are heavily referenced (100+ references), and that is often indicative of the importance of the work.


Summarize those papers. For the ones you thought most relevant, try to note what you learned from the paper. What did you like? What did you not like? Were there any areas they failed to address? Did they conflict with other work in the field? If so, why?


If you are looking for a research question, try to identify something the paper didn’t do: maybe a technique you think they could have used, or a dataset, or specific equipment. Look to see if any of the other papers have done that. If not, you now have a potential research question: “What happens if we use X when addressing problem Y?” Then think up ways that you can “fail fast” - how can you tell whether there is merit to that approach? If there isn’t, you want to know quickly so you can discard it before you spend too much time on it. If it looks promising, then push further and see where your research takes you.


As you do this work, write it up. If you are really disciplined, you will start writing your research paper before you have done the research. That will help you think about how you want to present your findings, which in turn helps you focus on what you need to do to generate the necessary data. Of course, you are likely to find out things don’t work the way that you wanted and you’ll have to rewrite your paper. This way you do have a history of what you did as well. When you’re done, you’ll have a research paper.


That’s the point at which you’ll have to tear it apart. Criticize it. Find its weaknesses. Think about how you will fix those weaknesses or address them (“we did not investigate X and leave it for future work…”). Then ask other people to read it - they won’t have any of your insight, so if you aren’t explaining it so they can understand why your work is important (the abstract and introduction!), then you need to go back and rewrite those sections.


At some point you decide you are done with the paper: the time allotted to it has expired, the work has been accepted for publication, or you’re sick of it and want to do anything else.


Good luck!



Tony Mason, PhD from The University of British Columbia (2022)


https://www.quora.com/How-do-I-prepare-for-writing-a-research-paper-on-any-topic-in-computer-science/answer/Tony-Mason-10



Introduction to Word Vectors

 



Word vectors will enable breakthroughs in NLP applications and research. They highlight the beauty of deep learning with neural networks and the power of learned representations of input data in hidden layers.


Word Vectors

Word vectors represent a significant leap forward in advancing our ability to analyze relationships across words, sentences, and documents. In doing so, they advance technology by providing machines with much more information about words than has previously been possible using traditional representations of words. It is word vectors that make technologies such as speech recognition and machine translation possible. There are many excellent explanations of word vectors, but in this one, I want to make the concept accessible to data and research people who aren't very familiar with natural language processing (NLP).


What Are Word Vectors?

Word vectors are simply vectors of numbers that represent the meaning of a word. For now, that's not very clear, but we'll come back to it in a bit. It is useful, first of all, to consider why word vectors are considered such a leap forward from traditional representations of words.


Traditional approaches to NLP, such as one-hot encoding and bag-of-words models (i.e. using dummy variables to represent the presence or absence of a word in an observation, i.e. a sentence), while useful for some machine learning (ML) tasks, do not capture information about a word's meaning or context. This means that potential relationships, such as contextual closeness, are not captured across collections of words. For example, a one-hot encoding cannot capture simple relationships, such as determining that the words "dog" and "cat" both refer to animals that are often discussed in the context of household pets. Such encodings often provide sufficient baselines for simple NLP tasks (for example, email spam classifiers), but lack the sophistication for more complex tasks such as translation and speech recognition. In essence, traditional approaches to NLP such as one-hot encodings do not capture syntactic (structure) and semantic (meaning) relationships across collections of words and, therefore, represent language in a very naive way.
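To make that limitation concrete, here is a minimal sketch (not from the article; the four-word vocabulary is invented) showing that one-hot vectors treat every pair of distinct words as equally unrelated:

```python
# Sketch: one-hot encodings carry no information about meaning, so
# "dog" and "cat" look exactly as unrelated as "dog" and "banana".
import numpy as np

vocabulary = ["cat", "dog", "banana", "car"]
word_to_index = {word: i for i, word in enumerate(vocabulary)}

def one_hot(word: str) -> np.ndarray:
    """Return the one-hot vector for a word in the toy vocabulary."""
    vector = np.zeros(len(vocabulary))
    vector[word_to_index[word]] = 1.0
    return vector

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(one_hot("dog"), one_hot("cat")))     # 0.0
print(cosine_similarity(one_hot("dog"), one_hot("banana")))  # 0.0
```

Every pair of distinct one-hot vectors is orthogonal, so the fact that "dog" and "cat" are related never enters the representation.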


In contrast, word vectors represent words as multidimensional continuous floating point numbers where semantically similar words are mapped to proximate points in geometric space. In simpler terms, a word vector is a row of real-valued numbers (as opposed to dummy numbers) where each point captures a dimension of the word's meaning and where semantically similar words have similar vectors. This means that words such as wheel and engine should have similar word vectors to the word car (because of the similarity of their meanings), whereas the word banana should be quite distant. Put differently, words that are used in a similar context will be mapped to a proximate vector space (we will get to how these word vectors are created below). The beauty of representing words as vectors is that they lend themselves to mathematical operators. For example, we can add and subtract vectors — the canonical example here is showing that by using word vectors we can determine that:


king - man + woman = queen


In other words, we can subtract one meaning from the word vector for king (i.e. maleness), add another meaning (femaleness), and show that this new word vector (king - man + woman) maps most closely to the word vector for queen.
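As a toy illustration (the four dimensions and all the numbers below are invented; real embeddings such as word2vec or GloVe are learned from data and typically have hundreds of dimensions), the arithmetic can be checked with a few hand-made vectors:

```python
# Sketch: hand-made 4-dimensional "word vectors" illustrating king - man + woman ~ queen.
# The dimensions and weights are invented purely for this example.
import numpy as np

word_vectors = {
    #              royalty, maleness, femaleness, plurality
    "king":  np.array([0.95, 0.90, 0.05, 0.10]),
    "queen": np.array([0.95, 0.05, 0.90, 0.10]),
    "man":   np.array([0.10, 0.90, 0.05, 0.10]),
    "woman": np.array([0.10, 0.05, 0.90, 0.10]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Subtract "maleness", add "femaleness".
result = word_vectors["king"] - word_vectors["man"] + word_vectors["woman"]

# Find the vocabulary word whose vector lies closest to the result.
closest = max(word_vectors, key=lambda w: cosine_similarity(result, word_vectors[w]))
print(closest)  # queen
```

With these made-up vectors the result lands exactly on queen; with learned embeddings the analogy holds only approximately, as a nearest-neighbour relationship.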


The numbers in the word vector represent the word's distributed weight across dimensions. In a simplified sense, each dimension represents a meaning and the word's numerical weight on that dimension captures the closeness of its association with and to that meaning. Thus, the semantics of the word are embedded across the dimensions of the vector.


A Simplified Representation of Word Vectors


https://dzone.com/articles/introduction-to-word-vectors

