Do Intelligent Robots Need Emotion?

What's your opinion?

Crowdsourcing a Word-Emotion Association Lexicon

.

A:

Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. 

In this paper, we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word–emotion and word–polarity association lexicon quickly and inexpensively. 

We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them. 

Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help to identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help to obtain annotations at sense level (rather than at word level). 

We conducted experiments on how to formulate the emotion-annotation questions, and show that asking if a term is associated with an emotion leads to markedly higher interannotator agreement than that obtained by asking if a term evokes an emotion.
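The word-choice quality check described above can be sketched as a simple filter. The data layout and field names below are hypothetical, invented for illustration; the paper's actual pipeline differs in detail:

```python
# Sketch of the word-choice filter: an annotation is kept only if the
# annotator picked the correct near-synonym for the target term, which
# suggests they know the word and are not clicking at random.
def filter_annotations(annotations, gold_choices):
    kept = []
    for ann in annotations:
        if ann["word_choice"] == gold_choices[ann["term"]]:
            kept.append(ann)
    return kept

# Toy example (invented): one honest annotation, one failed word-choice check.
gold = {"shopping": "buying things"}
anns = [
    {"term": "shopping", "word_choice": "buying things", "joy": 1},
    {"term": "shopping", "word_choice": "sleeping", "joy": 0},  # rejected
]
print(filter_annotations(anns, gold))
```

Because the word-choice options are tied to one sense of the term, the same check also pins annotations to that sense rather than to the word form.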

.

K:

Emotions, affect, polarity, semantic orientation, crowdsourcing, Mechanical Turk, emotion lexicon, polarity lexicon, word–emotion associations, sentiment analysis.

.

P:

https://onlinelibrary.wiley.com/doi/10.1111/j.1467-8640.2012.00460.x

.

D:

https://arxiv.org/pdf/1308.6297.pdf

.

S:

https://www.semanticscholar.org/paper/CROWDSOURCING-A-WORD%E2%80%93EMOTION-ASSOCIATION-LEXICON-Mohammad-Turney/54227c063bb04489caffd65ff9fc6218788ddb25

.

R:

https://www.researchgate.net/publication/256199465_Crowdsourcing_a_Word-Emotion_Association_Lexicon

.

G:

https://app.razzi.my/findgref?gid=15LQdmmfNMwhK7jb-4zkPkKOjZ0apm6fz

.

Read More

Google Ngram Viewer

.

The Google Ngram Viewer or Google Books Ngram Viewer is an online search engine that charts the frequencies of any set of search strings using a yearly count of n-grams found in sources printed between 1500 and 2019 in Google's text corpora in English, Chinese (simplified), French, German, Hebrew, Italian, Russian, or Spanish. There are also some specialized English corpora, such as American English, British English, and English Fiction.

.

The program can search for a word or a phrase, including misspellings or gibberish. The n-grams are matched with the text within the selected corpus, optionally using case-sensitive spelling (which compares the exact use of uppercase letters), and, if found in 40 or more books, are then displayed as a graph.

.

The Google Ngram Viewer supports searches for parts of speech and wildcards. It is routinely used in research.

.

The program was developed by Jon Orwant and Will Brockman and released in mid-December 2010. It was inspired by a prototype called "Bookworm" created by Jean-Baptiste Michel and Erez Aiden of Harvard's Cultural Observatory, Yuan Shen of MIT, and Steven Pinker.

.

The Ngram Viewer was initially based on the 2009 edition of the Google Books Ngram Corpus. As of July 2020, the program supports 2009, 2012, and 2019 corpora.

.

Operation and restrictions

Commas delimit user-entered search terms, each indicating a separate word or phrase to find. The Ngram Viewer returns a plotted line chart within seconds of the user pressing the Enter key or the "Search" button on the screen.

.

To adjust for the fact that more books have been published in some years than in others, the data are normalized to a relative level by the number of books published in each year.
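This normalization amounts to dividing an n-gram's yearly match count by that year's total count from the total_counts file. A minimal sketch, with toy numbers that are not actual corpus figures:

```python
# Relative frequency per year: match_count / total words recorded that year.
def relative_frequency(match_counts, total_counts):
    """match_counts, total_counts: dicts mapping year -> count."""
    return {year: match_counts[year] / total_counts[year]
            for year in match_counts if year in total_counts}

# Illustrative values only; the totals here are invented.
matches = {2007: 20017, 2008: 33722}
totals = {2007: 10_000_000_000, 2008: 12_000_000_000}
freqs = relative_frequency(matches, totals)
```

The plotted y-axis is therefore a percentage of the year's corpus, not a raw count.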

.

Due to limitations on the size of the Ngram database, only matches found in at least 40 books are indexed in the database; otherwise the database could not have stored all possible combinations.

.

Typically, search terms cannot end with punctuation, although a separate full stop (a period) can be searched for. An ending question mark (as in "Why?") triggers a second, separate search for the question mark.

.

Omitting the periods in abbreviations allows a form of matching: searching for "R M S" finds "R.M.S.", as distinct from "RMS".

.

Corpora

The corpora used for the search are composed of total_counts, 1-grams, 2-grams, 3-grams, 4-grams, and 5-grams files for each language. Each file is tab-separated, with one record per line in the following format:

.

total_counts file

year TAB match_count TAB page_count TAB volume_count NEWLINE

Version 1 ngram file (generated in July 2009)

ngram TAB year TAB match_count TAB page_count TAB volume_count NEWLINE

Version 2 ngram file (generated in July 2012)

ngram TAB year TAB match_count TAB volume_count NEWLINE

The Google Ngram Viewer uses match_count to plot the graph.
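The Version 2 record layout can be parsed with a simple tab-split; the field names below follow the format description in this section:

```python
from typing import NamedTuple

class NgramRecord(NamedTuple):
    ngram: str
    year: int
    match_count: int
    volume_count: int

def parse_v2_line(line: str) -> NgramRecord:
    """Parse one line of a Version 2 ngram file:
    ngram TAB year TAB match_count TAB volume_count NEWLINE."""
    ngram, year, match_count, volume_count = line.rstrip("\n").split("\t")
    return NgramRecord(ngram, int(year), int(match_count), int(volume_count))

rec = parse_v2_line("Wikipedia\t2008\t33722\t6825")
print(rec.match_count)  # 33722
```

A Version 1 parser would differ only in the extra page_count field between match_count and volume_count.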

.

As an example, the word "Wikipedia" from the Version 2 file of the English 1-grams is stored as follows:

ngram year  match_count volume_count
Wikipedia 1904  1 1
Wikipedia 1912  11  1
Wikipedia 1924  1 1
Wikipedia 1925  11  1
Wikipedia 1929  11  1
Wikipedia 1943  11  1
Wikipedia 1946  11  1
Wikipedia 1947  11  1
Wikipedia 1949  11  1
Wikipedia 1951  11  1
Wikipedia 1953  22  2
Wikipedia 1955  11  1
Wikipedia 1958  1 1
Wikipedia 1961  22  2
Wikipedia 1964  22  2
Wikipedia 1965  11  1
Wikipedia 1966  15  2
Wikipedia 1969  33  3
Wikipedia 1970  129 4
Wikipedia 1971  44  4
Wikipedia 1972  22  2
Wikipedia 1973  1 1
Wikipedia 1974  2 1
Wikipedia 1975  33  3
Wikipedia 1976  11  1
Wikipedia 1977  13  3
Wikipedia 1978  11  1
Wikipedia 1979  112 12
Wikipedia 1980  13  4
Wikipedia 1982  11  1
Wikipedia 1983  3 2
Wikipedia 1984  48  3
Wikipedia 1985  37  3
Wikipedia 1986  6 4
Wikipedia 1987  13  2
Wikipedia 1988  14  3
Wikipedia 1990  12  2
Wikipedia 1991  8 5
Wikipedia 1992  1 1
Wikipedia 1993  1 1
Wikipedia 1994  23  3
Wikipedia 1995  4 1
Wikipedia 1996  23  3
Wikipedia 1997  6 1
Wikipedia 1998  32  10
Wikipedia 1999  39  11
Wikipedia 2000  43  12
Wikipedia 2001  59  14
Wikipedia 2002  105 19
Wikipedia 2003  149 53
Wikipedia 2004  803 285
Wikipedia 2005  2964  911
Wikipedia 2006  9818  2655
Wikipedia 2007  20017 5400
Wikipedia 2008  33722 6825

The graph plotted by the Google Ngram Viewer using the above data is here:

https://books.google.com/ngrams/graph?content=Wikipedia&year_start=1900&year_end=2020&corpus=15&smoothing=0
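The smoothing=0 in the URL above requests raw yearly values. A nonzero smoothing setting of n replaces each year's value with a centered moving average over that year and up to n years on each side (fewer at the edges of the series). A minimal sketch of that averaging, assuming this standard interpretation of the parameter:

```python
# Centered moving average, as used by the Viewer's smoothing parameter.
def smooth(values, n):
    out = []
    for i in range(len(values)):
        window = values[max(0, i - n): i + n + 1]
        out.append(sum(window) / len(window))
    return out

print(smooth([1, 2, 3, 4, 5], 1))  # [1.5, 2.0, 3.0, 4.0, 4.5]
```

With n = 0 the data pass through unchanged, which is why smoothing=0 shows the raw yearly frequencies.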

.

Criticism

The data set has been criticized for its reliance upon inaccurate OCR, an overabundance of scientific literature, and for including large numbers of incorrectly dated and categorized texts. Because of these errors, and because it is uncontrolled for bias (such as the increasing amount of scientific literature, which causes other terms to appear to decline in popularity), it is risky to use this corpus to study language or test theories. Since the data set does not include metadata, it may not reflect general linguistic or cultural change and can only hint at such an effect.

.

Guidelines for doing research with data from Google Ngram have been proposed that address many of the issues discussed above.

.

OCR issues

Optical character recognition, or OCR, is not always reliable, and some characters may not be scanned correctly. In particular, systematic errors like the confusion of "s" and "f" in pre-19th-century texts (due to the use of the long s, which was similar in appearance to "f") can bias the results. Although Google Ngram Viewer claims that the results are reliable from 1800 onwards, poor OCR and insufficient data mean that frequencies given for languages such as Chinese may only be accurate from 1970 onward, with earlier parts of the corpus showing no results at all for common terms, and data for some years containing more than 50% noise.
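A crude illustration of the long-s problem: pre-19th-century printing used the long s (ſ), which OCR often misreads as "f", so "beſt" may be digitized as "beft". Where the OCR preserved the long-s character itself, one simple post-processing step (a sketch, not part of the Ngram pipeline) is to map it back to "s":

```python
# Map U+017F LATIN SMALL LETTER LONG S back to a plain "s".
# This only helps when OCR kept the character; a misread "f" is
# indistinguishable from a real "f" without dictionary-based correction.
def normalize_long_s(text: str) -> str:
    return text.replace("\u017f", "s")

print(normalize_long_s("beſt"))  # best
```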

.

REF:

https://en.wikipedia.org/wiki/Google_Ngram_Viewer

https://ai.googleblog.com/2006/08/all-our-n-gram-are-belong-to-you.html

Read More

Information retrieval and Information extraction in Web 2.0 environment


.

ABSTRACT:

With the rise of the Web 2.0 paradigm, new trends in information retrieval (IR) and information extraction (IE) can be observed. The significance of IR/IE as a fundamental method of acquiring new and up-to-date information makes it crucial for efficient decision making. 

Social aspects of modern information retrieval are gaining in importance over technical aspects. The main reason for this trend is that IR and IE services are becoming more and more widely available to end users who are not information professionals but regular users. New methods that rely primarily on user interaction and communication also show similar success in IR and IE tasks. 

Web 2.0 has an overall positive impact on IR and IE, as it is based on a more structured data platform than the earlier web. Moreover, new tools are being developed for online IE services that make IE accessible even to users without a technical background. 

The goal of this paper is to review these trends and put them into the context of what improvements and potential IR and IE have to offer to knowledge engineers, information workers, and typical Internet users.

.

KEYWORD:

Information extraction, Information retrieval, Web 2.0, Social bookmarking, Mashups, Folksonomies. 

.

PUBLINK:

https://www.bib.irb.hr/480921?rad=480921

.

DOCLINK:

https://www.naun.org/main/NAUN/computers/19-556.pdf

.

SEMLINK:

https://www.semanticscholar.org/paper/Information-Retrieval-and-Information-Extraction-in-Vlahovic/d91a84ac6f8c752b610cd9a6678c2e5057668c6e

.


Read More

Natural Language Processing: part 1 of lecture notes

.

Lecture Synopsis

Aims

This course aims to introduce the fundamental techniques of natural language processing, to develop an understanding of the limits of those techniques and of current research issues, and to evaluate some current and potential applications.

• Introduction. Brief history of NLP research, current applications, generic NLP system architecture, knowledge-based versus probabilistic approaches.

• Finite state techniques. Inflectional and derivational morphology, finite-state automata in NLP, finite-state transducers.

• Prediction and part-of-speech tagging. Corpora, simple N-grams, word prediction, stochastic tagging, evaluating system performance.

• Parsing and generation I. Generative grammar, context-free grammars, parsing and generation with context-free grammars, weights and probabilities.

• Parsing and generation II. Constraint-based grammar, unification, simple compositional semantics.

• Lexical semantics. Semantic relations, WordNet, word senses, word sense disambiguation.

• Discourse. Anaphora resolution, discourse relations.

• Applications. Machine translation, email response, spoken dialogue systems.
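As a toy illustration of the "simple N-grams, word prediction" topic above (not taken from the lecture notes themselves), a bigram model can be trained by counting adjacent word pairs and predicting the most frequent successor:

```python
from collections import Counter, defaultdict

def train_bigrams(tokens):
    """Count, for each word, how often each following word occurs."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most likely next word, or None for an unseen word."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

tokens = "the cat sat on the mat the cat ran".split()
model = train_bigrams(tokens)
print(predict_next(model, "the"))  # cat
```

Real taggers and predictors add smoothing for unseen events, but the counting step is exactly this.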

.

https://www.cl.cam.ac.uk/teaching/2002/NatLangProc/nlp1-4.pdf

Read More

Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews


.

A:

This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). 

The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. 

A phrase has a positive semantic orientation when it has good associations (e.g., "subtle nuances") and a negative semantic orientation when it has bad associations (e.g., "very cavalier"). 

In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word "excellent" minus the mutual information between the given phrase and the word "poor". 

A review is classified as recommended if the average semantic orientation of its phrases is positive. 

The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations).

The accuracy ranges from 84% for automobile reviews to 66% for movie reviews.
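The semantic-orientation formula from the abstract can be written out concretely. As in the paper, the PMI difference reduces to a log-ratio of co-occurrence hit counts; the counts below are invented for illustration:

```python
import math

def semantic_orientation(hits_near_excellent, hits_near_poor,
                         hits_excellent, hits_poor):
    """SO(phrase) = PMI(phrase, "excellent") - PMI(phrase, "poor")
    = log2( hits(phrase NEAR excellent) * hits(poor)
            / (hits(phrase NEAR poor) * hits(excellent)) )."""
    return math.log2((hits_near_excellent * hits_poor) /
                     (hits_near_poor * hits_excellent))

# Toy hit counts (invented): a phrase seen 16x more often near "excellent".
so = semantic_orientation(1600, 100, 4_000_000, 2_000_000)
print(so > 0)  # positive orientation
```

A review is then classified as recommended when the average of these scores over its adjective/adverb phrases is positive.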

.

C:

Turney, P.D. (2002). Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews. ACL.

.

K:


.

P:

https://aclanthology.org/P02-1053/

.

D:

https://aclanthology.org/P02-1053.pdf

.

S:

https://www.semanticscholar.org/paper/Thumbs-Up-or-Thumbs-Down-Semantic-Orientation-to-of-Turney/9e7c7853a16a378cc24a082153b282257a9675b7

.

R:

https://www.researchgate.net/publication/2537987_Thumbs_Up_or_Thumbs_Down_Semantic_Orientation_Applied_to

.

G:

https://app.razzi.my/findgref?gid=1MSx8ggcFJ9yOAVIw7WAXkIWSUlsIMapm

.

Read More

Can robots have emotions?


.

Can robots have emotions?
By: Dylan Evans

.

Science fiction is full of machines that have feelings. In 2001: A Space Odyssey, the onboard computer turns against the crew of the spaceship Discovery 1, and utters cries of pain and fear when his circuits are finally taken apart. In Blade Runner, a humanoid robot is distressed to learn that her memories are not real, but have been implanted in her silicon brain by her programmer. In Bicentennial Man, Robin Williams plays the part of a robot who redesigns his own circuitry so that he can experience the full range of human feelings.

.

These stories achieve their effect in part because the capacity for emotion is often considered to be one of the main differences between humans and machines. This is certainly true of the machines we know today. The responses we receive from computers are rather dry affairs, such as 'System error 1378'. People sometimes get angry with their computers and shout at them as if they had emotions, but the computers take no notice. They neither feel their own feelings, nor recognise yours.

.

The gap between science fiction and science fact appears vast, but some researchers in artificial intelligence now believe it is only a question of time before it is bridged. The new field of affective computing has already made some progress in building primitive emotional machines, and every month brings new advances. However, some critics argue that a machine could never come to have real emotions like ours. At best, they claim, clever programming might allow it to simulate human emotions, but these would just be clever fakes. Who is right? To answer this question, we need to say what emotions really are.

.

What are emotions?

.

In humans and other animals, we tend to call behaviour emotional when we observe certain facial and vocal expressions like smiling or snarling, and when we see certain physiological changes such as hair standing on end or sweating. Since most computers do not yet possess faces or bodies, they cannot manifest this behaviour. However, in recent years computer scientists have been developing a range of 'animated agent faces', programmes that generate images of humanlike faces on the computer's visual display unit. These images can be manipulated to form convincing emotional expressions. 

.

Others have taken things further by building three-dimensional synthetic heads. Cynthia Breazeal and colleagues at the Massachusetts Institute of Technology (MIT) have constructed a robot called 'Kismet' with moveable eyelids, eyes and lips. The range of emotional expressions available to Kismet is limited, but they are convincing enough to generate sympathy among the humans who interact with him. Breazeal invites human parents to play with Kismet on a daily basis. When left alone, Kismet looks sad, but when it detects a human face it smiles, inviting attention. If the carer moves too fast, a look of fear warns that something is wrong. Human parents who play with Kismet cannot help but respond sympathetically to these simple forms of emotional behaviour. 

.

Does Kismet have emotions, then? It certainly exhibits some emotional behaviour, so if we define emotions in behavioural terms, we must admit that Kismet has some emotional capacity. Kismet does not display the full range of emotional behaviour we observe in humans, but the capacity for emotion is not an all-or-nothing thing. Chimpanzees do not display the full range of human emotion, but they clearly have some emotions. Dogs and cats have less emotional resemblance to us, and those doting pet-owners who ascribe the full range of human emotions to their domestic animals are surely guilty of anthropomorphism, but to deny they had any emotions at all would surely be to commit the opposite, and equally egregious, error of anthropocentrism. There is a whole spectrum of emotional capacities, ranging from the very simple to the very complex. Perhaps Kismet's limited capacity for emotion puts him somewhere near the simple end of the spectrum, but even this is a significant advance over the computers that currently sit on our desks, which by most definitions are devoid of any emotion whatsoever. 

.

As affective computing progresses, we may be able to build machines with more and more complex emotional capacities. Kismet does not yet have a voice, but in the future Breazeal plans to give him a vocal system which might convey auditory signals of emotion. Today's speech synthesisers speak in an unemotional monotone. In the future, computer scientists should be able to make them sound much more human by modulating nonlinguistic aspects of vocalisation like speed, pitch and volume. 

.

Facial expression and vocal intonation are not the only forms of emotional behaviour. We also infer emotions from actions. When, for example, we see an animal stop abruptly in its tracks, turn round, and run away, we infer that it is afraid, even though we may not see the object of its fear. For computers to exhibit this kind of emotional behaviour, they will have to be able to move around. In the jargon of artificial intelligence, they will have to be 'mobots' (mobile robots). 

.

In my lab at the University of the West of England, there are dozens of mobots, most of which are very simple. Some, for example, are only the size of a shoe, and all they can do is find their way around a piece of the floor without bumping into anything. Sensors allow them to detect obstacles such as walls and other mobots. Despite the simplicity of this mechanism, their behaviour can seem eerily human. When an obstacle is detected, the mobots stop dead in their tracks, turn around, and head off quickly in the other direction. To anybody watching, the impression that the mobot is afraid of collisions is irresistible. 

.

Are these mobots really afraid? Or are the spectators, including me, guilty of anthropomorphism? People once asked the same question about animals. Descartes, for example, claimed that animals did not really have feelings like us because they were just complex machines, without a soul. When they screamed in apparent pain, they were just following the dictates of their inner mechanism. Now that we know that the pain mechanism in humans is not much different from that of other animals, the Cartesian distinction between sentient humans and 'machine-like' animals does not make much sense. In the same way, as we come to build machines more and more like us, the question about whether or not the machines have 'real' emotions or just 'fake' ones will become less meaningful. The current resistance to attributing emotions to machines is simply due to the fact that even the most advanced machines today are still very primitive. 

.

Some experts estimate that we will be able to build machines with complex emotions like ours within fifty years. But is this a good idea? What is the point of building emotional machines? Won't emotions just get in the way of good computing, or even worse, cause computers to turn against us, as they so often do in science fiction? 

.

Why give computers emotions? 

.

Giving computers emotions could be very useful for a whole variety of reasons. For a start, it would be much easier and more enjoyable to interact with an emotional computer than with today's unemotional machines. Imagine if your computer could recognise what emotional state you were in each time you sat down to use it, perhaps by scanning your facial expression. You arrive at work one Monday morning, and the computer detects that you are in a bad mood. Rather than simply asking you for your password, as computers do today, the emotionally-aware desktop PC might tell you a joke, or suggest that you read a particularly nice email first. Perhaps it has learnt from previous such mornings that you resent such attempts to cheer you up. In this case, it might ignore you until you had calmed down or had a coffee. It might be much more productive to work with a computer that was emotionally intelligent in this way than with today's dumb machines. 

.

This is not just a flight of fancy. Computers are already capable of recognising some emotions. Irfan Essa and Alex Pentland, two American computer scientists, have designed a program that enables a computer to recognise facial expressions of six basic emotions. When volunteers pretended to feel one of these emotions, the computer recognised the emotion correctly ninety-eight per cent of the time. This is even better than the accuracy rate achieved by most humans on the same task! If computers are already better than us at recognising some emotions, it is surely not long before they will acquire similarly advanced capacities for expressing emotions, and perhaps even for feeling them. In the future, it may be humans who are seen by computers as emotionally illiterate, not vice versa. 

.

What other applications might there be for emotional computers other than providing emotionally intelligent interfaces for desktop PCs? Rosalind Picard, a computer scientist at the MIT Media Laboratory in Boston, has proposed dozens of possible uses, including the following:

.

o Artificial interviewers that train you how to do well in job interviews by giving you feedback on your body language

o Affective voice synthesisers that allow people with speech problems not just to speak, but to speak in genuinely emotional ways

o Frustration monitors that allow manufacturers to evaluate how easy their products are to use

o Wearable computers ('intelligent clothing') that give you feedback on your emotional state so that you can tell when you are getting stressed and need a break 

.

All of these potential applications for emotional machines are resolutely utilitarian, but I think that most emotional machines in the future will be built not for any practical purpose, but purely for entertainment. If you want to envision the future of affective computing, don't think spacecraft and intelligent clothing — think toys and videogames. 

.

Many videogames already use simple learning algorithms to control non-player characters, such as monsters and baddies. In Tomb Raider, for example, the enemies faced by Lara Croft need only a few shots before they cotton on to your shooting style. If you are lying in wait for a dinosaur, it might remain in the shadows, tempting you to come out and take a pot shot so that it can attack you more easily. These are relatively simple programs, but the constant demand for better games means that the software is continually improving. It might well be that the first genuinely emotional computers are games consoles rather than spacecraft. 

.

Other entertainment software with proto-emotional capacities is also available in the form of the virtual pets that live in personal computers. Many kids now keep dogs and cats as screen-pets, and more recently a virtual baby has been launched. A program called The Sims lets you design your own people, but they soon take on a life of their own, which can be fascinating to watch. The Sims are eerily human in their range of emotional behaviour. They get angry, become depressed, and even fall in love. 

.

All these creatures are virtual; they live inside your computer, and their only 'body' is a picture on a screen. However, the first computerised creatures with real bodies are also now coming onto the toy market, and they too have proto-emotional capacities. First came little furry robots called 'Furbies', which fall asleep when tired and make plaintive cries when neglected for too long. Now there are also robotic dogs and cats that run around your living room without ever making a mess. There is even a baby doll with a silicon brain and a latex face that screws up into an expression of distress when it needs feeding. As with Kismet, people respond to these artificial life forms with natural sympathy. Their minds are not filled with ponderous doubts about whether these emotions are 'real' or not. They simply enjoy playing with them, as they would with a real kitten or baby. 

.

The gap between science fiction and science fact is closing. Today's computers and robots still have a long way to go before they acquire a full human range of emotions, but they have already made some progress. In order to make further progress, engineers and computer scientists will have to join forces with psychologists. Increasing numbers of psychology students are opting to study robotics and artificial intelligence at university. 

.

The future lies in their hands.

.

This article is a revised and edited version of chapter five in Emotion: The Science of Sentiment, by Dylan Evans (Oxford University Press, 2001). 

.

Thanks are due to Oxford University Press for granting permission to reproduce material from this book. 

.

REFERENCES:

o Breazeal, C. (2002) Designing Sociable Robots. MIT Press. 

o Essa, I. and A. Pentland (1997). “Coding, analysis, interpretation and recognition of facial expressions.” IEEE Transactions on Pattern Analysis and Machine Intelligence 19(7): 757-763. 

o Evans, D. (2001) Emotion. The Science of Sentiment. Oxford University Press. 

o Grand, S. (2004) Growing up with Lucy: How to build an android in twenty easy steps. Weidenfeld & Nicholson. 

o Picard, R. W. (1997). Affective Computing. MIT Press. 

Note on the author: Dylan Evans teaches robotics at the University of the West of England, Bristol. He has written several popular science books, and writes regularly for the Guardian and the Evening Standard. In 2001 he was voted one of the twenty best young writers in Britain by the Independent on Sunday, and was recently described by the Guardian as 'Alain de Botton in a lab coat'. See www.dylan.org.uk. 

.

KEYWORDS:

o affective computing

o mobots

o intelligent clothing

o artificial intelligence

o artificial life 

.

SOURCE:

https://www.inf.ed.ac.uk/events/hotseat/dylan_position.pdf

.

RGTLINK:

https://www.researchgate.net/publication/244106245_Can_robots_have_emotions

.

Read More

From information retrieval to information extraction

.

This paper describes a system which enables users to create on-the-fly queries which involve not just keywords, but also sortal constraints and linguistic constraints. 

The user can specify how the results should be presented, e.g. as links to documents or as table entries. 

The aim is to bridge the gap between keyword based Information Retrieval and pattern based Information Extraction.

.

CITE:
Milward, D., & Thomas, J. (2000). From Information Retrieval to Information Extraction.
.

PUBLINK:

https://dl.acm.org/doi/10.3115/1117755.1117767

.

DOCLINK:

https://aclanthology.org/W00-1109.pdf

.

SEMLINK:

https://www.semanticscholar.org/paper/From-Information-Retrieval-to-Information-Milward-Thomas/5fc646e64ee8137373d3a60b15c68f4666c751e7

.

Read More