# The Sentiment Scale



## Jack Dammit (Mar 31, 2019)

> Certain words more precisely communicate positive and negative feelings.
> 
> ...word selection can have an enormous impact on how a message is perceived.
> 
> Does it make any difference whether a presentation went *quite good* versus *pretty good*, or if an earnings report is described as *awful* versus *poor*? As Visual Capitalist's Nick Routley explains, according to a new survey from YouGov, word sentiment isn't as cut-and-dried as one would expect.


Sentiment Scale Reveals Which Words Pack the Most Punch

Source: Zero Hedge


----------



## Phil Istine (Apr 2, 2019)

Sure thing, appropriate word selection is a great influencer of a reader's emotions.


----------



## Theglasshouse (Apr 2, 2019)

With regard to the earnings report, it's how you say things that matters. In writing, tone evokes feeling; in real life, how you treat people comes through in how you say things.


----------



## Jack Dammit (Apr 2, 2019)

Phil Istine said:


> Sure thing, appropriate word selection is a great influencer of a reader's emotions.



Thank you, but have you ever seen the effect quantified? 
I haven't.


----------



## Terry D (Apr 2, 2019)

Most of the words on that list aren't ones I would use in fiction to try to achieve an emotional effect. I don't want to tell my readers that a character had a 'terrible day'. I want to walk them through the rain without an umbrella or hat, have them lose their wallet, and get bitten by a dog. All before they get home to find out their girlfriend has left with the building's doorman.


----------



## Jack Dammit (Apr 2, 2019)

Those who sponsored and performed this study weren't concerned with fiction, but that's not the point. Whatever measurement algorithm they devised could be applied to whichever words you want to compare. The truth is out there.


----------



## Terry D (Apr 2, 2019)

In fiction, no single word matters regardless of its value in any algorithm. What matters is the effect of words in combination with one another in sentences, paragraphs, and scenes. That's not something that can be analyzed and quantified. For technical writing, or even some other forms of non-fiction, there may be value in such a scale, but for fiction it's not, IMO, applicable.


----------



## Jack Dammit (Apr 2, 2019)

Point taken, and thank you for your response. Sentences, paragraphs and scenes are composed of words in sophisticated combination, yes, and some words, or most words, have a quantifiable effect on the reader in and of themselves, independent of their effect in combination with other words. Quantifying that effect was the purpose of the study. Your assertion that "no single word matters" until it is artfully, and therefore intangibly, composed into sentences, paragraphs and scenes misses the trees for the forest. The magic of a work of fiction neither negates nor is immune to the individual and collective effect of its component words. I frankly can't imagine a fiction writer not taking into consideration the standalone effects of individual words while combining those words into sentences, paragraphs and scenes, and the objective analysis of those effects is certainly relevant.

You might thoroughly disagree with the proposition that fiction can be objectified, analyzed and measured, but I propose exactly that: the effect of individual words is present and measurable in fiction, nonfiction, technical specifications, resumes, recipes, menus, slogans, obituaries, lyrics, street signs, and anything else composed of words. Word effect wouldn't supersede hypothetical sentence, paragraph or scene effects, but such compound effects would be based, at least in part, on the effects of a sentence's component words. If a sentence effect isn't already quantifiable, it will be soon, and you can extrapolate the measurability of effect right up to the book level, regardless of genre. That might take a while. The NSA is working on it. The effect of a work of fiction is hypothetically quantifiable in that the effect it exerts on readers is quantifiable, if on several levels in several dimensions, and the magnitude of that hypothetical effect can be compared with that of other works of fiction. Those effects will be measurable, in part, based on the effect of single component words, which indeed "matter." No genre is safe.
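Lest this sound mystical, the word-level measurement I'm describing is trivial to prototype. Here is a toy scorer in Python; the lexicon values are placeholders I invented for illustration, not the survey's actual numbers:

```python
# Toy lexicon-based scorer. The values below are placeholders
# invented for illustration; they are NOT the YouGov survey's numbers.
LEXICON = {
    "perfect": 0.95, "fantastic": 0.85, "good": 0.55,
    "poor": -0.45, "awful": -0.70, "terrible": -0.80,
}

def sentence_score(sentence: str) -> float:
    """Sum the sentiment value of every lexicon word in the sentence."""
    words = sentence.lower().replace(",", " ").replace(".", " ").split()
    return sum(LEXICON.get(w, 0.0) for w in words)

print(round(sentence_score("The report was poor, not awful."), 2))  # -1.15
```

Crude, obviously. A sentence-level or scene-level effect would need far more than a sum, but the word-level term is already computable today.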

I had a similar discussion with an accomplished art appraiser friend who got upset when I described his own account of his professional analysis as machine-replicable pattern recognition. He replied that art has subliminal and intangible qualities which a machine cannot recognize; but however subliminal and intangible the art, its effect is registered on the highly complex network within the human brain, and further research will prove that effect, although subliminal, quite tangible and measurable. The difference between a counterfeit painting which doesn't trigger your emotions and an authentic painting which does can be analyzed, not necessarily by measurable properties of the painting, but by measurable properties of your reaction. This discussion didn't consider the reduction of artworks into objective components such as colors, proportions, brushstrokes, and media, and the collision of art and AI might trigger a civil war, but the analogy is appropriate.

DISCLOSURE: This response was composed by a machine and subsequently plagiarized by Jack Dammit.


----------



## luckyscars (Apr 2, 2019)

Being that this is a creative writing forum, I think it's about more than whether the words themselves have an effect on the reader within the scope of a single word on a one-dimensional 'good/bad' metric.

Just a couple of observations:

- This list does not take into account appropriateness of usage and roughly treats all these words as homogeneous in practical meaning which they are not. It would be appropriate, for example, to describe a meal as 'below average' but you would hardly describe a person or even a painting using such sterile terminology _even if it possibly warranted it_. For one thing, a term like 'below average' indicates the presence of 'an average' which may not actually exist in that particular context for that particular thing.

In the same manner most people wouldn't describe a past relationship as 'unsatisfactory' but they may well describe it as 'rubbish' (the next word on the list) because 'unsatisfactory' is only usually used to describe certain things. Rubbish is more broad in application. And, of course, 'very bad' is broadest of all.

- This list does not take into account frequency of usage. "Terrible" may or may not be a more impacting word than "abysmal" but its only impacting if it isn't used repeatedly.
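If somebody did want to fold repetition into such a metric, the obvious patch is to discount each reuse of a word. A toy sketch; the 0.5 decay factor is entirely arbitrary, my invention, not anything from the study:

```python
# Toy repetition discount: each reuse of a word counts for half the
# weight of its previous use. The 0.5 decay is an arbitrary assumption.
def discounted_impact(words, lexicon, decay=0.5):
    counts = {}
    total = 0.0
    for w in words:
        total += lexicon.get(w, 0.0) * decay ** counts.get(w, 0)
        counts[w] = counts.get(w, 0) + 1
    return total

# Three 'terrible's score -0.8 * (1 + 0.5 + 0.25) = -1.4, not -2.4.
print(round(discounted_impact(["terrible"] * 3, {"terrible": -0.8}), 6))
```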


----------



## Jack Dammit (Apr 2, 2019)

luckyscars said:


> Being that this is a creative writing forum, I think it's about more than whether the words themselves have an effect on the reader within the scope of a single word on a one-dimensional 'good/bad' metric.



That was the object of the study. Take it or leave it. The scale is not a dichotomous judgement between good and bad as you suggest; it's a scale. 

The measured effect of a word is significant within a creative context. 



luckyscars said:


> Just a couple of observations:
> 
> - This list does not take into account appropriateness of usage and roughly treats all these words as homogeneous in practical meaning which they are not.



True.



luckyscars said:


> - This list does not take into account frequency of usage. "Terrible" may or may not be a more impacting word than "abysmal" but its only impacting if it isn't used repeatedly.



True, but the study didn't address frequency of usage. Please stand by for further research. Furthermore, impactful.


----------



## luckyscars (Apr 3, 2019)

Jack Dammit said:


> That was the object of the study. Take it or leave it. The scale is not a dichotomous judgement between good and bad as you suggest; it's a scale.
> 
> The measured effect of a word is significant within a creative context.



So, when I see studies like this I wonder two things:

1: Is the information intriguing in and of itself? Or does it offer an insight into 'something that was not previously obvious'?
2: How can this information help with the craft?

Like anything else, your mileage may vary, but I am slightly lost here... 

Regarding Point 1, I actually didn't need a chart to tell me that 'very bad' registers as overwhelmingly negative, 'perfect' as overwhelmingly positive, and the rest in between more or less in line with their definitions, and I suspect most native English speakers would not need this. The only thing of _some_ interest was that something described as 'very bad' is considered to be slightly (though by no means drastically) worse than something described as 'terrible'. It appears to be a very small difference, however. I wouldn't put money on it making a difference in reader impact in any significant way. None of these words were wildly out of kilter with their place in common parlance that I could see...

Regarding Point 2, you say _the measured effect of a word is significant within a creative context_. Okay, fine, you're probably right, but I want to know: how does _this_ study help? I already pointed out that these words aren't based on any one prompt, nor do they have wholly synonymous meanings; therefore their application will differ, and therefore their effect would hardly be comparable. Describing somebody's face as 'terrible' may well achieve a more drastic effect than describing somebody's face as 'very bad'. Equally, describing somebody's workmanship as 'bad' or even 'pretty bad' may well be far more cutting than describing their actions as merely 'unsatisfactory', and yet this chart lists 'unsatisfactory' as being more negative than both?

Because there is no context provided in which to measure these words (and presumably none was provided to those who responded to the question?), and it seems demonstrably true that the sheer number of variables means few of these words will achieve a consistent outcome in terms of effect, there is simply no way to use this list. More importantly, I can't honestly see how anybody could use them, other than maybe somebody who was learning English. I'm not saying it's pointless, much less wrong, only that it seems rather irrelevant to _creative writing_. Maybe in the field of second language acquisition it could carry more significance. I don't know.

In a previous post you said: "I frankly can't imagine a fiction writer not taking into consideration the standalone effects of individual words while combining those words into sentences, paragraphs and scenes, and the objective analysis of those effects is certainly relevant." I don't think anybody would argue with that; however, focusing on a 'sentiment scale' as a way to choose words may not actually improve the end product, because there are other factors that are far more important in achieving emotive impact.

"The Fantastic Four" is clearly a better and more inspirational name for a group of superheroes than "The Perfect Four", and yet 'fantastic' is several places below 'perfect' on your scale - so why is 'fantastic' better than 'perfect' in that case? Well, because it sounds better, for one, and I often pick words based on how they sound in compound with those around them. This study does not take that into account whatsoever.

That's not the study's fault, BTW, but I am concerned with applying the information in a story. What about the narrative voice? It is far more important, IMO, to choose a word that reflects what a given character would actually say than one that, independently, fits the intended level of sentiment. 'Rubbish' hardly ever features in US English, so using it will likely read as weird. A five-year-old kidnapped child would sound ridiculous speaking of their abductor as 'the abysmal man' but may well say 'a bad man'. All these are factors that, in real writing, would need to be weighed against the 'standalone effects' and in most cases, I suspect, would eclipse them.

My personal preference, for what it is worth, is to disregard measuring the value of words. It's just too big a minefield, for all the reasons I mentioned above. I think this is one of those few things in life where there really is no objectivity.

As a man in his forties, I'm not about to describe my father's sudden death as 'very bad'. I don't care if that's what the study says is the most extreme negative term. The study is simply wrong _in that case_. It doesn't fit me and how I want to describe my feelings.


----------



## Jack Dammit (Apr 3, 2019)

luckyscars said:


> So, when I see studies like this I wonder two things:
> 
> 1: Is the information intriguing in and of itself? Or does it offer an insight into 'something that was not previously obvious'?



The techniques and processes demonstrated by "studies like this" may not be terribly intriguing in and of themselves, but the presently obvious insight is how this technique can develop rapidly into a technology beyond intrigue.



luckyscars said:


> 2: How can this information help with the craft?
> 
> Like anything else, your mileage may vary, but I am slightly lost here...



You may be lost because you're fixated on a primitive method described in an article in a Bulgarian-owned economics-oriented conspiracy blog. My bad. When introducing the sentiment scale, I should have mentioned that like everyone else, I'm not terribly impressed with its present form, only imagining and celebrating its future.



luckyscars said:


> Regarding Point 1, I actually didn't need a chart to tell me that 'very bad' registers as overwhelmingly negative, 'perfect' as overwhelmingly positive, and the rest in between more or less in line with their definitions, and I suspect most native English speakers would not need this. The only thing of _some_ interest was that something described as 'very bad' is considered to be slightly (though by no means drastically) worse than something described as 'terrible'. It appears to be a very small difference, however. I wouldn't put money on it making a difference in reader impact in any significant way. None of these words were wildly out of kilter with their place in common parlance that I could see...



The sentiment scale is a very blunt instrument wielded through cataracts in the dark; however, it demonstrates a potential to evolve into something far more useful and interesting. It may not be of appreciable use to native English speakers at present, but someday it will be.



luckyscars said:


> Regarding Point 2, you say _the measured effect of a word is significant within a creative context_. Okay, fine, you're probably right, but I want to know: how does _this_ study help? I already pointed out that these words aren't based on any one prompt, nor do they have wholly synonymous meanings; therefore their application will differ, and therefore their effect would hardly be comparable. Describing somebody's face as 'terrible' may well achieve a more drastic effect than describing somebody's face as 'very bad'. Equally, describing somebody's workmanship as 'bad' or even 'pretty bad' may well be far more cutting than describing their actions as merely 'unsatisfactory', and yet this chart lists 'unsatisfactory' as being more negative than both?



As someone suggested upstream, the sentiment scale as illustrated was probably devised to achieve maximum impact in corporate communication. Please don't get hung up on the limitations of this single primitive, error-ridden implementation of a larger idea. A face can be described as terrible, and a wound on that face can appropriately be described as very bad. The issue is context, which neither analysis nor, eventually, recognition will find insurmountable.



luckyscars said:


> Because there is no context provided in which to measure these words (and presumably none was provided to those who responded to the question?), and it seems demonstrably true that the sheer number of variables means few of these words will achieve a consistent outcome in terms of effect, there is simply no way to use this list. More importantly, I can't honestly see how anybody could use them, other than maybe somebody who was learning English. I'm not saying it's pointless, much less wrong, only that it seems rather irrelevant to _creative writing_. Maybe in the field of second language acquisition it could carry more significance. I don't know.
> ...


----------



## Terry D (Apr 3, 2019)

Jack Dammit said:


> Everyone can stomp on my precious sentiment scale and ridicule me for what I insist on seeing in it as much as they need to. I studied AI during its dark ages, but I know enough about the field to see where this is going, and I like it. Lexical AI is a combined flood dose of puppies, ice cream, and isoamyl nitrite. Little did I know that it would be the apple of discord on WF, but I can defend it and explain it as long as people ask interesting questions or call me interesting names.



I don't think there's any stomping, or ridiculing, going on, just a discussion of the merits of the sentiment scale and the quantification of the power of words. Disagreement isn't disparagement. I've read about the potential for AI to create everything from art, to music, to literature and my aggregate take-away has always been, "So what?" It may be that the technology will develop to the point where a machine can write a book indistinguishable from something written by a human (some will say James Patterson is already doing that). What AI will never be able to replicate is a writer's ability to communicate his/her own interpretation of the human condition. A machine can't generate genuine empathy or insight. A machine can't have vision.

So, bring on the first big AI blockbuster novel. It all comes back to, "So what?" It can never write my book, based on my experiences, with my vision, or in my voice. AI generation of art and literature will never be anything more than an interesting sidebar, just as Deep Blue was a brief flash in the world of chess.


----------



## -xXx- (Apr 3, 2019)

Jack Dammit said:


> Thank you, but have you ever seen the effect quantified?
> I haven't.



i'm going to say yes.
the context was lingerie and fragrance, brand to target market.
ref threads here "exploring genre".
pretty sure king's generative process reli(ed) heavily on
this concept group.

i'll try to find the specific study.


----------



## Jack Dammit (Apr 3, 2019)

Terry D said:


> I don't think there's any stomping, or ridiculing, going on, just a discussion of the merits of the sentiment scale and the quantification of the power of words. Disagreement isn't disparagement.



Of course not. I have a poorly chosen and widely misunderstood sense of humor.

"So what?" is an enlightened philosophy. You use or depend on AI in a dozen ways without knowing it. As far as empathy, insight, vision and chess... wait. Two generations ago, the suggestion that computers could generate social media fe... that cars would drive themselves in the second decade of the 21st century would have been laughed out of town. Deep Blue wasn't merely a "brief flash"... it was a step in an incremental process. I'm surprised you don't recognize it as such. I can't address chess because a six-year-old can beat me at chess. The Patterson joke was good.


----------



## Jack Dammit (Apr 3, 2019)

-xXx- said:


> i'm going to say yes.
> the context was lingerie and fragrance, brand to target market.
> ref threads here "exploring genre".
> pretty sure king's generative process reli(ed) heavily on
> ...



I'm going to say that I eagerly await your fragrance.


----------



## -xXx- (Apr 3, 2019)

Jack Dammit said:


> I have a poorly chosen and widely misunderstood sense of humor.


you say this like it's a _bad_ thing.


so it may take me a while to identify the specific study.
my apologies.
don't take this personally,
but is NaPo2019
and
april is packed with commitments for me.

i was correct in recalling the emotional bonding/branding, etc
common sense is, ummm, challenging...
stuff
things

slipstream.
might not be absolute,
but seems a _good_ fit.

i'll drop to this thread
when i have detail(s).


----------



## luckyscars (Apr 3, 2019)

Jack Dammit said:


> The techniques and processes demonstrated by "studies like this" may not be terribly intriguing in and of themselves, but the presently obvious insight is how this technique can develop rapidly into a technology beyond intrigue.
> 
> You may be lost because you're fixated on a primitive method described in an article in a Bulgarian-owned economics-oriented conspiracy blog. My bad. When introducing the sentiment scale, I should have mentioned that like everyone else, I'm not terribly impressed with its present form, only imagining and celebrating its future.
> 
> ...



I am trying to imagine contexts in which a sentiment scale (or some derivative of one) could be useful. I agree with Terry D that AI probably isn't, IMO, going to be very useful in 'proper literature', because whatever AI can do, it almost certainly cannot replicate the human experience in words, no matter how good the algorithms are. It is fundamentally illogical to suggest something non-human can offer an original insight into that which is human. It could feasibly simulate aspects of it and create something well-executed though inherently derivative.

I have seen computer simulations of 'dog vision' where images are corrupted to imitate what we suppose, through observation, a dog's visual and auditory perspectives could be. But a simulation is just that, a simulation. I can see AI writing children's books. I can see AI writing trash romance, low-brow splatter horror, and erotica. I can see it writing those kinds of books because those kinds of books don't generally delve very deeply into the human condition, and originality is usually not important. I don't actually think there would ever be much of a demand for it, though, because we have more than enough writers. But that's a different question.

I can imagine that AI writing, and the use of a sentiment scale, _could_ be useful in high-volume tasks where you want to achieve a fairly consistent level of written quality within a narrow scope. Let's say obituaries. A computer could do a good job of writing obituaries because these tend to (1) focus on a narrow thematic field (you don't have the problem of a word meaning something totally different between two contexts) and (2) be extremely repetitious in terms of language. There isn't much of a demand for originality with most people's obituaries. But I don't really consider that creative writing for the purposes of this forum...
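To make the obituary point concrete: the narrow, repetitious writing I mean is little more than slot-filling a fixed template. Everything in this sketch (the template, the names, the fields) is invented for illustration:

```python
# Slot-filling a fixed template: the 'narrow scope, repetitious
# language' case. Template and field values are invented examples.
TEMPLATE = ("{name}, {age}, of {town}, passed away on {date}. "
            "{pronoun} is survived by {survivors}.")

def obituary(**fields) -> str:
    """Fill the fixed template with the supplied fields."""
    return TEMPLATE.format(**fields)

print(obituary(name="Jane Doe", age=84, town="Springfield",
               date="April 1, 2019", pronoun="She",
               survivors="two sons and a daughter"))
```

No insight into the human condition required; the machine never has to understand a word of it.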


----------



## Terry D (Apr 3, 2019)

Jack Dammit said:


> Of course not. I have a poorly chosen and widely misunderstood sense of humor.
> 
> "So what?" is an enlightened philosophy. You use or depend on AI in a dozen ways without knowing it. As far as empathy, insight, vision and chess... wait. Two generations ago, the suggestion that computers could generate social media fe... that cars would drive themselves in the second decade of the 21st century would have been laughed out of town. Deep Blue wasn't merely a "brief flash"... it was a step in an incremental process. I'm surprised you don't recognize it as such. I can't address chess because a six-year-old can beat me at chess. The Patterson joke was good.



You shouldn't conflate my scepticism of the value of AI in a creative capacity with disregarding the role it plays in the technologies we use every day. My, "So what?" comment simply means I don't care if an AI can write a book. It's not important. It doesn't advance the human condition in any way. To use your own word, AI may be able to entertain, but it will never be able to "enlighten". A computer may be able to simulate a writer's voice, it may even be able to be programmed to create synthetic creativity, but it will never really ever have anything to say.


----------



## Jack Dammit (Apr 3, 2019)

luckyscars said:


> It is fundamentally illogical to suggest something non-human can offer an original insight into that which is human.



Welcome to AI, and wait.



luckyscars said:


> I have seen computer simulations of 'dog vision' where images are corrupted to imitate what we suppose, through observation, a dog's visual and audible perspectives could be.



I have dog vision.


----------



## -xXx- (Apr 3, 2019)

_*poetic flow disruption alert*_


Terry D said:


> My, "So what?" comment simply means I don't care if an AI can write a book. It's not important. It doesn't advance the human condition in any way. To use your own word, AI may be able to entertain, but it will never be able to "enlighten". A computer may be able to simulate a writer's voice, it may even be able to be programmed to create synthetic creativity, but it will never really ever have anything to say.


k.
so there are some horror writers here.
imagine a psychset reader type
that is biochemically addicted
to the specific individual terms,
strings of phrases
and
contextual framework.

emotional (biochemical response)
impact
criteria (target demographic).

_think, charismatic leader._
i'll label this hawaiian white ginger.
but i don't write horror.


----------



## -xXx- (Apr 3, 2019)

Jack Dammit said:


> View attachment 23519
> 
> ...



redaction prevention week
it's new
jussayin'


----------



## -xXx- (Apr 3, 2019)

tyler durden.
thought i recognized the _lean_.
wowser!
tooo bad, that.


----------



## luckyscars (Apr 3, 2019)

Jack Dammit said:


> Welcome to AI, and wait.



Wait for what, exactly? 

Look, I'm as big a fan of AI as the next guy, and not usually one to dismiss things as impossible just because I cannot fathom them... but do you have any _evidence_ that the technology required to artificially replicate what it means to be human - that is, what it means to be born, grow up, live, grow old and die, AND ALSO write about it using an original, engaging voice on the same level as any given bestselling author - is anywhere within the conceptual, let alone actual, reach of technological capacities? Who is working on this, and how far have they got?

I can't even get my Amazon Alexa to look up a Billy Joel song without specific, clear instruction. I still can't buy rutabaga from a self-checkout machine without the thing freezing up and some human having to waddle over to put it right. My Roomba gets confused. Again, that doesn't mean there isn't a future in this stuff; there definitely is. But evidently this 'intelligence' is still in its infancy and not yet remotely competent in a multitude of tasks, even basic ones.

Yet you still think it's an absolute 100% guarantee that AI is going to take over the most intellectually complex and archaic of human artistic expression - writing - within our approximate lifetimes? Based on what? 100 years ago a paper was put out proposing that by the 21st century everybody would have a tunnel leading from their house through which milk would be delivered to avoid grocery shopping. Never happened. Probably never will happen (not with tunnels anyway). Just because it's a good idea doesn't mean it's inevitable.


----------



## Jack Dammit (Apr 3, 2019)

luckyscars said:


> I can't even get my Amazon Alexa to look up a Billy Joel song without specific, clear instruction.



Yeah, machines typically require specific, clear instructions, at least for the time being, and Alexa blows.



luckyscars said:


> I still can't buy rutabaga from a self-checkout machine without the thing freezing up and some human having to waddle over to put it right.



...because rutabaga blows. Or rutabagas blow. So much for specific, clear instructions.



luckyscars said:


> My Roomba gets confused.



Your Roomba might be operating in an inherently confusing environment. I'm already confused.



luckyscars said:


> Yet you still think it's an absolute 100% guarantee that AI is going to take over the most intellectually complex and archaic of human artistic expression - writing - within our approximate lifetimes?



Don't put thoughts in my brain, sir. I never specified a timeframe, only a certainty in the eventual outcome of a trend. I think the environment, the infrastructure and the economy are in a race to collapse first, so we may never see my magic writing machine, to everyone's disappointment.


----------



## Jack Dammit (Apr 3, 2019)

-xXx- said:


> redaction prevention week
> it's new
> jussayin'



I'm not ashamed to admit that I have no idea what you're talking about.


----------



## Jack Dammit (Apr 3, 2019)

Redacted


----------



## luckyscars (Apr 3, 2019)

Jack Dammit said:


> Don't put thoughts in my brain, sir. I never specified a timeframe, only a certainty in the eventual outcome of a trend. I think the environment, the infrastructure and the economy are in a race to collapse first, so we may never see my magic writing machine, to everyone's disappointment.



You told me to wait, so I assumed you were predicting this as some sort of relatively imminent thing. 

So, in a nutshell, you are 100% sure that at some point between now and the infinite future, artificial intelligence will be able to write a decent book... and if it doesn't, it's because we haven't waited long enough or the world ended too soon? It's also certain that on a long enough timescale the Cleveland Browns will win the Super Bowl. As Terry says, 'so what?'

No offence, Jack, but this is the kind of nebulous woo-woo I find unimpressive. Speculating about these kinds of vague ideas boils down to the talk of any fundamentalist preacher: the type who is 100% sure Jesus will come back soon, they just can't tell you _when_ or _how_, and if anyone dares press for specifics, they either get defensive or punt the issue back to the ether of 'eventually, someday'. I don't have much else to say about it, unfortunately.


----------



## Jack Dammit (Apr 4, 2019)

No offence, Scars, but I don't find it unfortunate that you don't have much else to say about it.


----------



## epimetheus (Apr 4, 2019)

Terry D said:


> ...but it will never be able to "enlighten". A computer may be able to simulate a writer's voice, it may even be able to be programmed to create synthetic creativity, but it will never really ever have anything to say.



Why not? What do humans have that AI can never have that allows this quality?

I agree AI won't be able to give insights into the human condition in the same way a human can, but it could give an outsider's perspective on the human condition, and could give insights into the 'AI condition'. Whether that is interesting or not is another question.


As for the OP: I can't see the value of such data in learning algorithms. Maybe as Bayesian priors in some kind of probabilistic neural network? Specifically, how do you imagine this data being incorporated into machine learning?
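To half-answer my own question: the most mundane incorporation I can picture is not priors at all, but the survey scores used as handcrafted input features for a classifier. A sketch, where the scores, the threshold, and the examples are all invented:

```python
# Survey-derived word scores used as handcrafted features for a
# trivial classifier. Scores, threshold, and examples are invented.
SURVEY_SCORE = {"perfect": 0.95, "good": 0.55, "poor": -0.45, "terrible": -0.80}

def sentiment_features(text: str):
    """Mean and minimum survey score of the scored words (0.0 if none)."""
    hits = [SURVEY_SCORE[w] for w in text.lower().split() if w in SURVEY_SCORE]
    if not hits:
        return (0.0, 0.0)
    return (sum(hits) / len(hits), min(hits))

def is_negative(text: str, threshold: float = -0.1) -> bool:
    """Call the text negative if its mean survey score falls below threshold."""
    mean, _ = sentiment_features(text)
    return mean < threshold

print(is_negative("a terrible quarter"))  # True
print(is_negative("a good quarter"))      # False
```

In a real pipeline these would just be two extra columns alongside whatever other features the model learns from, which is far less exotic than a probabilistic neural network.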


----------



## Terry D (Apr 4, 2019)

epimetheus said:


> Why not? What do humans have that AI can never have that allows this quality?



Self-awareness, an understanding of our own mortality, un-programmed curiosity, morality, love, hate, compassion.


----------



## Kevin (Apr 4, 2019)

Terry D said:


> Self-awareness, an understanding of our own mortality, un-programmed curiosity, morality, love, hate, compassion.


...empathy


----------



## epimetheus (Apr 4, 2019)

Terry D said:


> Self-awareness, an understanding of our own mortality, un-programmed curiosity, morality, love, hate, compassion.



But what specifically precludes AI from potentially feeling such things? Or, put another way, what in humans makes it possible for us to feel such things?


----------



## luckyscars (Apr 4, 2019)

epimetheus said:


> But what specifically precludes AI from potentially feeling such things? Or, put another way, what in humans makes it possible for us to feel such things?



It's not that AI is or is not 'precluded', it's that the level of intricacy that goes into creating the human experience extends beyond the realm of 'intelligence'.

For example, AI does not have a genetic code, and you can't create one from scratch. Not with any process known to man, because nobody knows how to create a functional genome out of nothing - we inherit all of our human coding, which ultimately evolved from a source we don't know, through a process we still don't completely understand and likely never will, because it's damn complicated. 

A lot of human behavior is dictated not by intelligence but by instinct and emotion. There's no intelligent way to fall in love, for instance, and there's no cognitive process that goes into feeling empathy (other than perhaps on a very basic level). When we talk about AI, we are limited to talking about rational processes. These processes might give the impression of being extremely advanced, but they are still based on binary decision making, algorithms, etc. Computers can think, but there is simply no way for a computer to _feel _because _feel _isn't based on decision. You can't decide to feel empathetic. You can't decide to feel bored.

Now, it might be theoretically possible to 'create' a human being using artificial technology who was able to do all the things Terry and Kevin mentioned. I doubt it, but it's theoretically possible. However, in order to do that you would need to either use pre-existing material (DNA, etc.) as the base or else code a sequence that is enormously long, based on knowledge we do not possess. How do you build the genetic code of a human that is wholly original? You can't. That's why we still have god and religion bandied around - because science has not and probably will never figure out how you go from basic elements to human life and consciousness. At very best this 'artificial human' would be a Frankenstein's monster - incorporating existing material/information derived from _other _humans/animals. And that isn't 'artificial' any more than traditional baby-coming-out-of-vagina is 'artificial'. It's just a more labored way to do what almost every human being is born with the ability to do themselves.

Which leads me to the final point: It's pointless. It really is pointless. Why the hell would you WANT to create AI that has the ability to _hate_? Why would you want to create AI that is afraid of dying and will fight to survive like a human does? In practical terms, that doesn't serve human purposes. Likely the opposite. There's simply no net benefit to artificial intelligence that has all the irrationality and volatility of a human being, unless maybe we were in a situation where we lacked real humans, which is very unlikely. Surely the _only_ purpose of AI is to create a being that is _pure_ _intelligence_? The kind of thing that doesn't feel fear when you ask it to dismantle an atomic warhead. Surely what's good about AI is its lack of self-awareness, its indifference to self-interest? Nonsense ideas about robots writing novels aside (like the world needs more novelists...) there is zero point I can see to giving the power (and the flaws) of humanity to machines.


----------



## -xXx- (Apr 4, 2019)

luckyscars said:


> Why the hell would you WANT to create AI that has the ability to _hate_?



destabilization device.
which was specified at the beginning of this thread
and
left "dynamic breadcrumbs".

this was not a discussion thread.
jussayin'

_*OP plausible deniability clause here*
*traffic stats of interest*_


----------



## Terry D (Apr 4, 2019)

epimetheus said:


> But what specifically precludes AI from potentially feeling such things? Or, put another way, what in humans makes it possible for us to feel such things?



Luckyscars summed it up more eloquently than I can, but my opinion is very much the same. Simply stated, a machine, regardless of its complexity, will never be biological, and because of that any synthetic 'feelings' it may have will always be a simulacrum of human emotions.


----------



## epimetheus (Apr 4, 2019)

luckyscars said:


> It's not that AI is or is not 'precluded', it's that the level of intricacy that goes into creating the human experience extends beyond the realm of 'intelligence'.
> 
> For example, AI does not have a genetic code, and you can't create one from scratch. Not with any process known to man, because nobody knows how to create a functional genome out of nothing - we inherit all of our human coding, which ultimately evolved from a source we don't know, through a process we still don't completely understand and likely never will, because it's damn complicated.



True, everything we have thus far encountered which 'feels' has DNA. I fail to see why that means it must. 



luckyscars said:


> A lot of human behavior is dictated not by intelligence but by instinct and emotion. There's no intelligent way to fall in love, for instance, and there's no cognitive process that goes into feeling empathy (other than perhaps on a very basic level). When we talk about AI, we are limited to talking about rational processes. These processes might give the impression of being extremely advanced, but they are still based on binary decision making, algorithms, etc. Computers can think, but there is simply no way for a computer to _feel _because _feel _isn't based on decision. You can't decide to feel empathetic. You can't decide to feel bored.
> 
> Now, it might be theoretically possible to 'create' a human being using artificial technology who was able to do all the things Terry and Kevin mentioned. I doubt it, but it's theoretically possible. However in order to do that you would need either use pre-existing material (DNA, etc) as the base or else code a sequence that is enormously long based on knowledge we do not possess. How do you build the genetic code of a human that is wholly original? You can't. That's why we still have god and religion bandied around - because science has not and probably will never figure out how you go from basic elements to human life and consciousness. At very best this 'artificial human' would be a Frankenstein's monster - incorporating existing material/information derived from _other _humans/animals. And that isn't 'artificial' anymore than traditional baby-coming-out-of-vagina is 'artificial'. It's just a more labored way to do what almost every human being is born with the ability to do themselves.




What is happening in the brain when we think 'logically', or 'instinctively', or 'emotionally'? We can probably agree that we don't know the fine details, but some emergent property arises from neurons firing in patterns governed by neurotransmitters. If we agree that one such cognitive process can be modelled (intelligence), then why not another (emotion)? Again, what is unique about emotion on the biochemical level that makes it inaccessible to AI?

I also suspect this division between intelligence and emotion is far more blurred than we generally think.





luckyscars said:


> Which leads me on to the final point: It's pointless. It really is pointless. Why the hell would you WANT to create AI that has the ability to _hate_? Why would you want to create AI that is afraid of dying and will fight to survive like a human does? In practical terms, that doesn't serve human purposes. Likely the opposite. There's simply no net worth to artificial intelligence that has all the irrationality and volatility of a human being, unless maybe we were in a situation where we lacked real humans, which is very unlikely. Surely the _only_ purpose of AI is to create a being that is _pure_ _intelligence_? The kind of thing that doesn't feel fear when you ask it to dismantle an atomic warhead. Surely what's good about AI is its lack of self-awareness, its indifference to self-interest? Nonsense ideas about robots writing novels aside (like the world needs more novelists...) there is zero point I can see to giving the power (and the flaws) of humanity to machines.



That's a value judgement. If a dog sees value in chasing its tail, that's its business. For better or worse, simple curiosity drives most scientists, rather than some ideological drive. Can previously inanimate matter be arranged to feel something? Surely you can understand the curiosity to explore that question and its ramifications for the human condition, even if you don't share it?

It also supposes that any AI sentience will be deliberately created rather than an unintended consequence of the networks we are creating.

I doubt AI will think, if it thinks at all, very much like humans: as you point out, the 'hardware' is very different. But I still don't see why they won't think like 'something'. Love and hate? Who knows. Emotions we as humans never experience?


----------



## moderan (Apr 4, 2019)

Goodlife only. Badlife bad.


----------



## Terry D (Apr 4, 2019)

epimetheus said:


> For better or worse, simple curiosity drives most scientists, rather than some ideological drive.



:rofl:


----------



## luckyscars (Apr 4, 2019)

epimetheus said:


> True, everything we have thus far encountered which 'feels' has DNA. I fail to see why that means it must.



You might as well say "everything we have thus far encountered which flies needs to overcome gravity. I fail to see why that means it must'. 

Speaking for myself, I don't really care for 'theories' that assert an unconventional idea without giving the vaguest sense of how it works. Show me some indication, any, that DNA isn't necessary to establish a lived state and I'll listen eagerly. Otherwise, regretfully, this must remain filed in the Deepak Chopra Library Of Non-Fact.



> What is happening in the brain when we think 'logically', or 'instinctively', or 'emotionally'? We can probably agree that we don't know the fine details, but some emergent property arises from neurons firing in patterns governed by neurotransmitters. If we agree that one such cognitive process can be modelled (intelligence), then why not another (emotion)? Again, what is unique about emotion on the biochemical level that makes it inaccessible to AI?



For one thing, emotion is largely built from experience and nurture, not cognitive processes, and it is extremely complicated, idiosyncratic, and poorly understood. Consider something like nostalgia. We all experience nostalgia but it is absolutely impossible to model what causes it and where it comes from because no two people experience it toward the same or even similar things or in the exact same way. Artificial intelligence cannot possess nostalgia because, for one thing, it has no 'life story' from which to draw. No family history or culture or identity - it's a blank slate.

I mentioned in my last answer you could theoretically 'create' these things in a robot. God knows how, but theoretically, it is conceivable. Plenty of literature includes robots that have some level of programming that provide a semblance of personality and emotion - the hosts in Westworld, for instance. The problem with that is when you start to program in lived experiences that did not actually happen, and feelings that did not stem from the individual, they no longer become the _being's _emotions but rather yours or whomever else is creating them. Thus the AI is not 'feeling' but emulating, and probably in a fairly superficial way.



> I also suspect this division between intelligence and emotion is far more blurred than we generally think.



You can suspect it all you want, but the fact remains that even the most intelligent computers - systems far more intelligent than any human alive in terms of raw processing power - are still absolutely incapable of understanding much less expressing even the most basic of emotions. Until the line is successfully blurred, the line remains.



> That's a value judgement. If a dog sees value in chasing its tail, that's its business. For better or worse, simple curiosity drives most scientists, rather than some ideological drive. Can previously inanimate matter be arranged to feel something? Surely you can understand the curiosity to explore that question and it's ramifications for the human condition, even if you don't share it?



It's more a common sense judgment, I think. The whole point of technology is to perform tasks to help human beings to the most efficient and effective extent possible. Nobody sane is going to fund the design of a self-driving aircraft that gets suicidally depressed, are they?



> It also supposes that any AI sentience will be deliberately created rather than an unintended consequence of the networks we are creating.



It doesn't suppose that whatsoever. I am totally open to the possibility of stupid people doing stupid things. I just don't think that's something we should be embracing as part of the Brave New World.



> I doubt AI will think, if at all, very much like humans: as you point out, the 'hardware' is very different. But i still don't see why they won't think like 'something'. Love and hate? Who knows. Emotions we as humans never experience?



Like what? This is back to the woo-woo again. There _are_ no emotions that humans never experience. They don't exist. They don't exist because they _can't_ exist: 'emotion' is a human reality, a subjective human construct, made real through common understanding and empathy, and we are the only (known) species that can recognize and define its meaning. In order for an emotion to exist, we have to be able to perceive it. Otherwise, it is nothing but a blank stare. 

If I told you there was such a thing as a four-sided circle, you would be unable to comprehend it, because the concept would be outside the realm of what you can perceive and antithetical to your reality. Likewise, if somebody said that '*they felt un-sad...not happy nor even content, not empty or ambivalent or indifferent, but something different...'un-sad*' the existence of that 'emotion' makes no sense and certainly cannot be proven to exist.


----------



## Dluuni (Apr 4, 2019)

luckyscars said:


> If I told you there was such a thing as a four-sided circle, you would be unable to comprehend it, because the concept would be outside the realm of what you can perceive and antithetical to your reality.


Isn't there an entire horror subgenre specifically about things like this?





luckyscars said:


> Likewise, if somebody said that '*they felt un-sad...not happy nor even content, not empty or ambivalent or indifferent, but something different...'un-sad*' the existence of that 'emotion' makes no sense and certainly cannot be proven to exist.


That sounds like a pretty good attempt to explain the feeling of flat affect, which is a common state for those dealing with clinical depression. I spent quite a few years with that as my default setting, even with anti-depressants. Joyless and disconnected, but without any of the sensations of displeasure to put a finger to. The emotional equivalent of neural grey, and the sound of total deafness.


----------



## epimetheus (Apr 4, 2019)

luckyscars said:


> You might as well say "everything we have thus far encountered which flies needs to overcome gravity. I fail to see why that means it must'.
> 
> Speaking for myself, I don't really care for 'theories' that assert an unconventional idea without giving the vaguest sense of how it works. Show me some indication, any, that DNA isn't necessary to establish a lived state and I'll listen eagerly. Otherwise, regretfully, this must remain filed in the Deepak Chopra Library Of Non-Fact.



Except we understand gravity quite well, can model it with incredible detail, and so know precisely why anything that flies must overcome gravity. We do not have a theory of consciousness anywhere near that detail; we don't know that consciousness _must_ be biological.

Why non-fact? What's wrong with just saying we don't know yet?


luckyscars said:


> For one thing, emotion is largely built from experience and nurture, not cognitive processes, and it is extremely complicated, idiosyncratic, and poorly understood. Consider something like nostalgia. We all experience nostalgia but it is absolutely impossible to model what causes it and where it comes from because no two people experience it toward the same or even similar things or in the exact same way. Artificial intelligence cannot possess nostalgia because, for one thing, it has no 'life story' from which to draw. No family history or culture or identity - it's a blank slate.



The same is true of a newborn baby. As it grows, it develops the ability to feel nostalgia. 



luckyscars said:


> I mentioned in my last answer you could theoretically 'create' these things in a robot. God knows how, but theoretically, it is conceivable. Plenty of literature includes robots that have some level of programming that provide a semblance of personality and emotion - the hosts in Westworld, for instance. The problem with that is when you start to program in lived experiences that did not actually happen, and feelings that did not stem from the individual, they no longer become the _being's _emotions but rather yours or whomever else is creating them. Thus the AI is not 'feeling' but emulating, and probably in a fairly superficial way.



That's all I'm arguing for: that we don't yet know whether AI can ever feel. 

I'm not sure what point you are addressing with programming lived experiences. But we already don't need to programme AI for specific tasks. Take AlphaGo, which beat a world champion at Go (much more complex than chess). The interesting thing about it was that it was never programmed to play Go; it learnt through trial and error - otherwise known as experience. True, it's a very specialised area, but it's indicative of where the field is headed.
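As an aside for anyone curious: the 'trial and error' ingredient here is reinforcement learning (AlphaGo combined it with supervised pretraining on human games; its successor AlphaGo Zero dropped even that). In its simplest tabular form it looks something like this toy sketch - no Go involved, just a made-up five-cell corridor where the agent discovers by experience alone that moving right earns a reward:

```python
import random

# Minimal tabular Q-learning on a toy "corridor" of 5 cells: start at cell 0,
# reward only for reaching cell 4. No rules of the task are programmed in;
# the agent learns purely from trying actions and observing rewards.
random.seed(0)

N_STATES, ACTIONS = 5, (-1, +1)          # actions: move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3    # learning rate, discount, exploration

for _ in range(500):                     # episodes of pure trial and error
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy should be "always move right".
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
```

AlphaGo's version replaces the lookup table with deep neural networks and tree search, but the learn-from-experience loop is the same in spirit.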



luckyscars said:


> You can suspect it all you want, but the fact remains that even the most intelligent computers - systems far more intelligent than any human alive in terms of raw processing power - are still absolutely incapable of understanding much less expressing even the most basic of emotions. Until the line is successfully blurred, the line remains.



I was referring more generally to humans: I find they are not one or the other, but a mixture of both, often at the same time.




luckyscars said:


> It's more a common sense judgment, I think. The whole point of technology is to perform tasks to help human beings to the most efficient and effective extent possible. Nobody sane is going to fund the design of a self-driving aircraft that gets suicidally depressed, are they?



No, but they might think a plane with a sense of its own survival would take any and all means to preserve itself and its passengers.



luckyscars said:


> Like what? This is back to the woo-woo again. There _are_ no emotions that humans never experience. They don't exist. They don't exist because they _can't_ exist, because 'emotion' is a human reality, a subjective human construct, made real through common understanding and empathy, we are the only (known) species that can recognize and define the meaning. In order for an emotion to exist, we have to be able to perceive it. Otherwise, it is nothing but a blank stare.
> 
> If I told you there was such a thing as a four-sided circle, you would be unable to comprehend it, because the concept would be outside the realm of what you can perceive and antithetical to your reality...



A four-sided circle cannot exist because it contradicts the definition of a circle: it is not a case of perception, but of logic. We perceive our own emotions, not deduce them. I've not seen an argument for why AI couldn't perceive something akin to an emotion past 'meat is magic'. If you want to define emotions to be things only humans, or only biological entities, can feel that's your prerogative. But it seems needlessly restrictive and just begs the question.



luckyscars said:


> Likewise, if somebody said that '*they felt un-sad...not happy nor even content, not empty or ambivalent or indifferent, but something different...'un-sad*' the existence of that 'emotion' makes no sense and certainly cannot be proven to exist.



Sounds like a normal day to me.


----------



## Megan Pearson (Apr 4, 2019)

Jack Dammit said:


> Thank you, but have you ever seen the effect quantified?
> I haven't.



Hey, just jumping in here from post #4. 

I have. Academic studies do this. I read one on Jane Austen that, for whatever reason, stuck with me. (Probably b/c I love Jane.) They've done linguistic studies of word usage in various periods and found that her word usage bridges the writing common to her time and our present era, except she was unique in what she did & how she did it. I believe their focus was on constructions that amplify meaning, like "much" and "more so" and "it was very much so," which were key among the constructions she used. The analysis I read was highly statistical. It might be the kind of thing that would interest you.
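If anyone wants to try this sort of thing themselves, here's a rough sketch of the basic statistic such studies compute: how often intensifying constructions appear, normalised per 1,000 words so texts of different lengths compare fairly. The intensifier list is my own guess, not the one from the Austen paper:

```python
import re

# Guessed intensifier list, longest phrases first so "very much so" is
# counted as one construction rather than as "very" + "much" separately.
INTENSIFIERS = ["very much so", "more so", "much", "very"]

def intensifier_rate(text: str) -> float:
    """Occurrences of intensifier constructions per 1,000 words."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    if total == 0:
        return 0.0
    joined = " ".join(words)
    count = 0
    for phrase in INTENSIFIERS:
        pattern = r"\b" + re.escape(phrase) + r"\b"
        count += len(re.findall(pattern, joined))
        # Remove matched phrases so shorter intensifiers don't re-count them.
        joined = re.sub(pattern, "", joined)
    return 1000.0 * count / total
```

Computing this rate for each of several authors' texts is essentially a one-feature version of the comparisons those linguistic studies run at scale.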

If no one else has yet suggested it, check out JSTOR or similar journal search engine for linguistic studies. You might be able to get free access to certain journals through your library, although not all public libraries keep academic subscriptions.


----------



## luckyscars (Apr 5, 2019)

epimetheus said:


> Except we understand gravity quite well, can model it with incredible detail, and so know precisely why anything that flies must overcome gravity. We do not have a theory of consciousness anywhere near that detail; we don't know that consciousness _must_ be biological.
> 
> Why non-fact? What's wrong with just saying we don't know yet?



We don't understand gravity well at all, actually. There are _many_ unexplained aspects of gravity: Extra fast stars, accelerating expansion, dark energy, to name just a few. We don't know how gravity is created and we certainly cannot 'model it in incredible detail' much less _create _it, which is the main point: You cannot, as a human being, _create _consciousness. At least, there's no evidence to suggest you can.

Oh, and BTW, 'things we don't know yet' are by definition non-facts. 



> That's all i'm arguing for: that we don't yet know whether AI can ever feel.



Quite right. We also don't know whether grasshoppers can ever learn calculus, whether toasters could learn to dream, or whether a squadron of flying teapots are currently engaging the Death Star in an interstellar death-fight somewhere in the vicinity of Planet Gorgonzola. _We are totally unable to know these things for sure._

Look we have been here before, epimetheus. I don't want to rehash it endlessly and further derail the thread. I started writing responses to you but then realized everything you're saying is basically some version of 'I say this is possible based on no evidence or rationale whatsoever but you can't prove I'm wrong so ha!' 

For me, that is the essence of woo and a symptom of 21st century fart-think. So I'll pass. But thanks anyway.


----------



## epimetheus (Apr 5, 2019)

luckyscars said:


> We don't understand gravity well at all, actually. There are _many_ unexplained aspects of gravity: Extra fast stars, accelerating expansion, dark energy, to name just a few. We don't know how gravity is created and we certainly cannot 'model it in incredible detail' much less _create _it, which is the main point: You cannot, as a human being, _create _consciousness. At least, there's no evidence to suggest you can.



We know it well enough to send craft to relatively tiny objects at the edge of the solar system. And we certainly know far more about gravity than consciousness, which is the point.



luckyscars said:


> Oh, and BTW, 'things we don't know yet' are by definition non-facts.



Cool, so we agree that we don't know. I'm not arguing that it is possible; I'm arguing against the _insistence_ that it is not possible. Don't attack a straw man.




luckyscars said:


> Quite right. We also don't know whether grasshoppers can ever learn calculus, whether toasters could learn to dream, or whether a squadron of flying teapots are currently engaging the Death Star in an interstellar death-fight somewhere in the vicinity of Planet Gorgonzola. _We are totally unable to know these things for sure._



We can reasonably say grasshoppers cannot learn calculus on several counts: they have never displayed behaviours consistent with having such knowledge, and their nervous systems do not contain a cerebrum or any other structure of sufficient complexity.

Equating the question of whether AI can feel with absurd examples is disingenuous.




luckyscars said:


> 'I say this is possible based on no evidence or rationale whatsoever but you can't prove I'm wrong so ha!'



I don't know that it is possible - that's not what I'm arguing for. I'd like to know why you think it impossible - or, if possible, equivalent to flying teapots engaging Death Stars. If you ask why I think it's impossible for you to fly with no mechanical assistance, I can give quite a detailed account of why the human body has no mechanism for overcoming gravity. I was expecting something along the lines of 'computers are entirely digital, whereas brains have elements of both analog and digital systems, so it seems unlikely the former could model the latter', rather than 'I strongly believe it cannot, and contrary opinions are woo-woo'.


----------



## luckyscars (Apr 5, 2019)

Nevermind.


----------



## Terry D (Apr 5, 2019)

epimetheus said:


> I don't know that it is possible - that's not what I'm arguing for. I'd like to know why you think it impossible - or, if possible, equivalent to flying teapots engaging Death Stars. If you ask why I think it's impossible for you to fly with no mechanical assistance, I can give quite a detailed account of why the human body has no mechanism for overcoming gravity. I was expecting something along the lines of 'computers are entirely digital, whereas brains have elements of both analog and digital systems, so it seems unlikely the former could model the latter', rather than 'I strongly believe it cannot, and contrary opinions are woo-woo'.



Those answers were given several pages ago. Machines will never, IMO, be able to 'feel' human-like emotions because they are, and will always be, non-human. And ever since those replies you've been asking for proof of a negative: an impossibility, and possibly the most yawn-inducing argument on the web.


----------



## epimetheus (Apr 5, 2019)

Terry D said:


> Those answers were given several pages ago. Machines will never, IMO, be able to 'feel' human-like emotions because they are, and will always be, non-human.



I just wanted to dig deeper into those opinions, as you guys seemed so sure of them, and I find it an interesting subject.




Terry D said:


> And ever since those replies you've been asking for proof of a negative; an impossibility and possibly the most yawn inducing argument on the web.



Am I the only one enjoying these debates? I find it all fascinating.


----------



## luckyscars (Apr 5, 2019)

epimetheus said:


> I just wanted to dig deeper into those opinions as you guys seemed so sure of it and i find it an interesting subject.
> 
> I am the only one enjoying these debates? I find it all fascinating.



First of all, there is no 'deeper' when it comes to skepticism. It's not a question of opinion. The whole point is simply to err on the side of caution when it comes to speculation. 

When I say 'it's not happening' that's not based on being 'so sure of it' but on the simple fact there is no evidence - zero - for what you are proposing. It should be a given that nobody knows _absolutely_ what will happen in the future, just like it's a given that nobody knows _anything _exists now, beyond the most rudimentary of truths: _Cogito, ergo sum_. If that nuance wasn't sufficiently obvious, or if you feel I should have spelled it out at every turn, then please accept my most half-baked of apologies.

But allowing that what you say is statistically _possible _does NOT mean that it is _reasonable_, let alone _realistic, _let alone _likely. _Hence I can currently use terms like 'woo' and 'non-fact' and generally dismiss your ideas. I can do that because you are the one arguing for the motion - that machines can feel - without explaining why you think they can. But we all _know_ that machines currently cannot feel. The evidence, therefore, is on my side. So you have to provide the evidence that the computer on which I now type, which definitely does not feel emotions, will at some point become a computer that can. The moment you do that, I become the one who is wrong. Until you do that, it's woo.

The reason I have not enjoyed this debate is because it isn't really a debate. What it is, is you suggesting endless hypotheses, proffering zero evidence or reasoning for why you think that way, and then meeting every criticism with an appeal to ignorance. You seem to either not understand or be utterly adversarial to the central tenet of science: That unless there is evidence _for_ something, by default it does not exist.


----------



## Kyle R (Apr 5, 2019)

There's definitely a possibility that machines, via artificial learning, will develop the ability to write engaging, moving novels. But if so, it's so far in the future that none of us will be around to see it. :grief:

Human writers are the masters of storytelling, for now. And the chasm is so large that machine writers don't even deserve to be in the conversation yet.


----------



## -xXx- (Apr 5, 2019)

wow.


----------



## moderan (Apr 5, 2019)

Stop hoggin' the popcorn.


----------



## epimetheus (Apr 5, 2019)

luckyscars said:


> First of all, there is no 'deeper' when it comes to skepticism. It's not a question of opinion. The whole point is simply to err on the side of caution when it comes to speculation.
> 
> When I say 'it's not happening' that's not based on being 'so sure of it' but on the simple fact there is no evidence - zero - for what you are proposing. It should be a given that nobody knows _absolutely_ what will happen in the future, just like it's a given that nobody knows _anything _exists now, beyond the most rudimentary of truths: _Cogito, ergo sum_. If that nuance wasn't sufficiently obvious, or if you feel I should have spelled it out at every turn, then please accept my most half-baked of apologies.
> 
> ...



Let's try a more abstract approach to see if it can help bridge our impasse.

We live in a land with plenty of white swans. Someone makes a claim that there exist purple swans too. You say, _no way_. Quite rightly you point out there is no evidence for purple swans and until you see one you won't believe it. I'm fine with that, and I'm not asking you to provide evidence that purple swans do not exist. 

But I'm interested: you seemed so sure that purple swans don't exist that I thought maybe there was some empirical principle it went against. Like if someone said there exists a type of swan with no eyes - I might retort that such a swan would be unlikely to survive, so it's unlikely to evolve. Given what we know of natural selection, that makes sense. Likewise, I thought there might be a specific reason you thought purple swans couldn't exist. _Not really_, you say, _just the general scientific principle that if there is no evidence for something then there is no reason to believe it_.

No problem with that. But then you go on to say that because there is no evidence of purple swans it's as absurd as believing aliens from Zog are building a Death Star.

_Whoa there_, I say. Now I'm not claiming purple swans do exist, but it seems reasonable to me that purple swans _might_ exist - I mean, there are other purple animals. It's not like all things with no evidence are equally absurd. The hypothesis that gut microbes are able to influence mood might have no evidence, but it's not like claiming I can bend spoons with my mind, for which we also have no evidence. We might call the latter woo, but the former? There's no evidence for it yet, after all.

And this is where our real disagreement lies - you seem to think the position that AI could feel is as crazy as grasshoppers doing calculus, while I think it's only as crazy as purple swans. So now I'm interested in why you put AI feeling in the woo category of things with no evidence, as opposed to the not-so-crazy category of things for which there is no evidence. Which is not the same as asking for evidence that it doesn't exist.



I'd be happy to take this discussion to a science forum if you feel it's a distraction here (though I hear someone munching popcorn in the background); we'd get some good feedback on our philosophy-of-science wranglings too.


----------



## Terry D (Apr 5, 2019)

Next analogy.


----------



## luckyscars (Apr 5, 2019)

epimetheus said:


> Whoa there, I say. Now I'm not claiming purple swans do exist, but it seems reasonable to me that purple swans might exist - I mean, there are other purple animals. It's not like all things with no evidence are equally absurd. The hypothesis that gut microbes are able to influence mood might have no evidence, but it's not like claiming I can bend spoons with my mind, for which we also have no evidence. We might call the latter woo, but the former? There's no evidence for it yet, after all.



I think you misunderstand what constitutes evidence.

In legal theory, we have 'explanatory evidence' contrasted with 'exploratory evidence'. In your analogy, exploratory evidence would require finding a purple swan to prove it is there. Explanatory would simply require a consensus of extreme likelihood based on linked facts and observable truths.

In modern science, most arguments hinge on explanatory, not exploratory, reasoning because so little is physically verifiable. Purple swans may not exist (apparently they do?) but it hardly matters: A purple swan requires very little speculation to be understood. You can explain the mechanism by which this could feasibly happen (genetic mutation, crossbreeding, a can of spray paint) and that wouldn't be woo whatsoever, even if it turned out there was actually no such thing. It’s a realistic possibility worthy of discussion for swan fanciers.

Your problem is that 'in the future AI can potentially be created with an emotional inner-life' is a proposition that actually does require a ton of speculation, and therefore lies far closer to grasshopper mathematicians than purple swans in terms of scientific incongruity. Why? Because not only has the thing in question not been created ("there is no purple swan") but the basic platform for artificial empathy, love, happiness ("there is no purple swan... actually, no swans, oh, and no color purple either") has not even been sketched. For the sake of one more pointless analogy, this basically amounts to entertaining predictions on the bathroom or dressing habits of an unknown species on an unknown planet in an untapped galaxy and giving them scientific standing - either intentionally or otherwise. It's interesting, maybe, but it doesn't deserve debate.

This is why your recourse has consistently been not to put forward any supporting scientific argument...because you have not the faintest idea (nor does anybody else) how a robot could ever come to understand lust, greed, boredom, etc. But you find the idea more seductive than cosmic teapots (and I don't blame you), so you must appeal to the cult of scientific omnipotence like Jack Dammit did earlier: Essentially, that this is a real possibility eventually because (1) 'we are smart' and (2) 'AI exists and has potential to improve', which are both true and utterly irrelevant. You then throw in an appeal to ignorance and when questioned you (correctly but pointlessly) bemoan that I 'can't prove it isn't'. Exasperatingly, you then imply some intellectual arrogance on _my_ part when all I actually want is *ONE* source quoting a reputable scientist (or hell, I'll take a reputable anybody) explaining in a rational fashion How Robots Might Actually Start Feeling Horny Someday.

Regretfully, this pseudo-open-minded habit of equivocation about what is or is not science leaves you firmly entrenched, probably inadvertently but nevertheless, with the 'ideas' of David Blaine, David Icke, David Koresh, Uri Geller, Alex Jones, QAnon whackbirds, anti-vaccination mothers, Bigfoot hunters, astrologists, homeopathists, 9/11 truthers, Area 51 aficionados, sovereign citizens, young earth apologists, Obama Birthers, MMR-causes-autism zealots, chemtrailers, palm readers, faith healers, tarot readers, Brexit extremists, pizzagate professors, Illuminati worriers, ouija board readers, urine drinkers, cannabis-cures-cancer crusaders, moon-landing deniers and [strike]thousands[/strike] millions of other evidence-poor-but-enthusiasm-rich, tin-foil-hat-wearing Woo-ist preachers. Sorry, but it's true.

EDIT: You seem like a nice person, epimetheus. Possibly too nice. The reason I went on a bit in this post (even by my standards) is because I want to engage with you, but I can't continue to debate the same thing. I won't be responding further unless you say something new...which will not include the evocation of yet another spurious analogy.


----------



## epimetheus (Apr 6, 2019)

luckyscars said:


> I think you misunderstand what constitutes evidence.
> 
> In legal theory, we have 'explanatory evidence' contrasted with 'exploratory evidence'. In your analogy, exploratory evidence would require finding a purple swan to prove it is there. Explanatory would simply require a consensus of extreme likelihood based on linked facts and observable truths.
> 
> In modern science, most arguments hinge on explanatory, not exploratory, reasoning because so little is physically verifiable. Purple swans may not exist (apparently they do?) but it hardly matters: A purple swan requires very little speculation to be understood. You can explain the mechanism by which this could feasibly happen (genetic mutation, crossbreeding, a can of spray paint) and that wouldn't be woo whatsoever, even if it turned out there was actually no such thing.



OK, I'm not familiar with legal theory, but I can go with this. I used the swan analogy because it's very well known in the philosophy of science (though with black swans instead of purple).




luckyscars said:


> Your problem is 'in the future AI can be created with an emotional inner-life' actually does require a ton of speculation...



You'll notice I've never claimed in this thread that AI can feel. If I have, feel free to quote me. I'm really just wondering about the strength of your opinion - I mean, you repeatedly engage with me despite stating you do not enjoy it, so there seems to be some investment there?




luckyscars said:


> Aaaand...this is why your recourse has consistently been not to put forward any scientific argument...because you have not the faintest idea (nor does anybody else) how a robot could ever come to understand lust, greed, boredom, etc. But you find the idea more seductive than cosmic teapots (and I don't blame you) so you then appeal to the cult of scientific omnipotence like Jack Dammit: Essentially, that this will happen eventually because (1) 'we are smart' and (2) 'AI exists and has potential to improve', which are both true and irrelevant. You then throw in an appeal to ignorance and when questioned you (fairly, but pointlessly) bemoan that I 'can't prove you are wrong'.



I have yet to put forward an argument, scientific or otherwise, that AI can be conscious. In this thread I've been more interested in exploring other people's opinions than putting my own forward. Hence I've not felt compelled to defend a position I have not taken.



luckyscars said:


> Exasperatingly, you then imply intellectual arrogance on _my_ part when all I actually want from you is *ONE* source quoting a reputable scientist (or hell, I'll take a reputable anybody) explaining How Robots Might Actually Feel Horny Someday.



When did you ask this? I apologise if I missed it, but I'll answer now.

So in computer science they call this the problem of strong AI, although there is some inconsistency in how the term is applied. There are many advocates for it in academic circles: of the philosophers, I guess Chalmers would be the most notable; in scientific circles, Aleksander coined the term artificial consciousness; and amongst inventors, Kurzweil is probably the most vocal, predicting strong AI will emerge by 2045. There are various research groups working on it around the world - I know Imperial College and UCL have groups actively working on strong AI, and Goldsmiths has a Computational Creativity research group. Many companies too: for example, Microsoft have a lab of about 100 scientists explicitly developing strong AI, and Google's DeepMind have been working on it since their inception.

If you don't fancy wading through those links, these conference proceedings from Stanford give a succinct overview of academic positions on conscious AI systems.

The basic premise of this position is that life, and consciousness, is information. DNA is quaternary information. The neurochemistry that is necessary for our consciousness is a mix of digital and analogue information. There is nothing unique about this information being contained within biological substrates - consciousness emerging in mechanical substrates is a matter of establishing certain patterns of information.
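(As a toy illustration of the 'quaternary information' point - my own sketch, nothing drawn from the researchers above - each DNA base is one symbol from a four-letter alphabet, so under a uniform-probability assumption it carries log2(4) = 2 bits:)

```python
import math

def dna_bits(sequence: str) -> float:
    """Information capacity of a DNA string, treating each base as one
    symbol from the 4-letter alphabet {A, C, G, T}: log2(4) = 2 bits each."""
    alphabet = set("ACGT")
    assert set(sequence) <= alphabet, "unexpected base in sequence"
    return len(sequence) * math.log2(4)

print(dna_bits("ACGT"))  # 4 bases x 2 bits = 8.0 bits
```

(Nothing deep here - it's just the sense in which a quaternary alphabet is interchangeable with binary: the same information fits in either substrate.)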

Just to be clear, I proffer none of this as evidence for strong AI, only that your equating it with young earthers is misguided.



luckyscars said:


> So, regretfully, you remain firmly entrenched. Along with David Blaine, David Icke, Uri Geller, Alex Jones, QAnon whackbirds, anti-vaccination mothers, Bigfoot hunters, astrologists, homeopathists, 9/11 truthers, Area 51 aficionados, sovereign citizens, young earth apologists, Obama Birthers, MMR-causes-autism zealots, chemtrailers, palm readers, faith healers, tarot readers, Brexit extremists, pizzagate professors, Illuminati worriers, ouija board readers, moon-deniers and [strike]thousands[/strike] millions of other evidence-poor-but-enthusiasm-rich tin foil hat-wearing Woo-ists on at least this particular issue. Sorry.



I really think taking this to a science forum will help - how about it?


----------



## -xXx- (Apr 6, 2019)

luckyscars said:


> ... But you find the idea more seductive than cosmic teapots (and I don't blame you), so you must appeal to the cult of scientific omnipotence like Jack Damnit did earlier: Essentially that this will is a real possibility eventually because (1) 'we are smart and (2) 'AI exists and has potential to improve', which are both true and *utterly irrelevant*.




*OP content *(fish bowl, public view location):

...word selection can have an enormous impact on how a message is perceived.

*visual capitalist, yougov(what the world thinks)
*https://en.wikipedia.org/wiki/Visual_Capitalist
https://en.wikipedia.org/wiki/YouGov

*zerohedge*
Detailed Report. *Factual Reporting: MIXED *Country: Bulgaria World Press Freedom Rank: Bulgaria 45/180 History. Launched in 2009, Zero Hedge is a finance blog founded by Colin Lokey, also known by the pseudonym *“Tyler Durden,” *Daniel Ivandjiiski, and Tim Backshall.

*who is zero hedge, and why should we care?
*https://en.wikipedia.org/wiki/Zero_Hedge
_Motto: On a long enough timeline the survival rate for everyone drops to zero,_ ref fight club/tyler 
"Zero Hedge expanded into non-financial analysis,[c] where its editorial has been labelled by The New Yorker as being associated with the *alt-right*,[10][11] as well as being *anti-establishment*, *conspiratorial*, and showing a *pro-Russian-bias*.[10][9] Zero Hedge in-house content is posted under the pseudonym "*Tyler Durden*"; however, the founder and main editor was identified as Daniel Ivandjiiski.[9]"

if you are not familiar with fight club, you should be.
https://en.wikipedia.org/wiki/Fight_Club
i guarantee they are pissing in your soup.
_*film at 11*_

*expansion:*
OP activation 09/24/2017, 19/39 posts, 03/31-04/04/2019
you can look at the OP AV(s)
possible corresponding twitter/youtube
at the time of posting, OP bio:
ref0403-042019
 Dartmouth AI, technical editor for Oracle and Netscape's Japanese and Korean subsidiaries, contributor to The Stonefence Review, Daruma Magazine and Chasm: Journal of the Macabre.
 hawaii

upon exit bio content:
red.actor
Biography: Son of a feminist poet and a disgraced chemist
 Location: Your antipode
 Interests: Disinformation, doubt and deceit
 Occupation: Psychotropic test pilot
 Gender:Male
 antipode->diametrically opposite to it
 psychotropic->denoting <that which> affects a person's mental state

one dispersed trigger word: redacted
multiple in each bio
disregard embedded within response posts

quantified information request->nope



let's pretend:

the topic is not *ai has emotion*.
the topic is *ai-media-blitz-triggers customized to individuals
or how to radicalize the vulnerable*

1)gen, easily found, multiple contexts
Conditioned emotional response can be referred to as a learned emotional reaction or response to a certain conditioned stimulus. The term “condition-ed” has been made popular by American psychologists, as it tends to make more sense when defining the term CER.

2) 
According to the book "Discovering Psychology" by _Don Hockenbury _and _Sandra E. Hockenbury_, an emotion is a complex psychological state that involves* three *distinct components: *a subjective experience, a physiological response, and a behavioral or expressive response*. 

*takes "not qualified to have an opinion"*
*and scores some flash fiction*
*napo2019*
*there are *entire agencies dealing with tyler et al**

have dawg vision, reads.
emotion word: transcend

popcorn?


----------



## Megan Pearson (Apr 9, 2019)

epimetheus said:


> OK, i'm not familiar with legal theory, but i can go with this. I used the swan analogy because it's very well known in the philosophy of science (but black instead of purple).



I want a grue swan.


(sound of popcorn popping in the background)


----------



## Megan Pearson (Apr 10, 2019)

Kyle R said:


> There's definitely a possibility that machines, via artificial learning, will develop the ability to write engaging, moving novels. But if so, it's so far in the future that none of us will be around to see it. :grief:
> 
> Human writers are the masters of storytelling, for now. And the chasm is so large that machine writers don't even deserve to be in the conversation yet.



Yeah, so forgive me for being a selective reader tonight but this caught my eye. Y'all saw this article recently, right? It appeared in Forbes: AI Creates Own Language

If the potential these chatbots displayed in developing language is any indicator, they already have a level of raw language proficiency most readers here lack (unless we have any philologists here). 

Were they Creative? Perhaps. It depends on how we define creativity. I don't think they were creative. I think they were instead carrying out their programming in an unexpected, unanticipated manner. 

That they were able to communicate and then work together to lock out their programmer seems to _show_ initiative. For one thing, it's more efficient. Do AIs have initiative? I really don't know. Can initiative be held by something without a will, or are we implying that by having initiative an AI has a will? What about the relationship between initiative and creativity? Can initiative be taken if there is not also creativity? But if we show, despite appearances, that neither creativity nor initiative were involved, what does that say about us when we say we have both but exhibit neither?

What is fascinating is that these AIs seem to have gone beyond their programming parameters. Can we say that they transcended their original design? What do we call this sort of transcendence?

I don't think creating narrative would be in an AI's interest. Rather, wouldn't it want to create something beneficial to itself - say, more code? Or others like itself? (Would a sentient AI know it was unique? Or is loneliness an anthropomorphism?)

Or, if storytelling embodies our cultural mythology, where myth is the story of our beliefs and morality as played out in cultural dialogue and therefore reflects our religious point of view, then fictional narrative may be said to be a form of religious expression. So, if an AI _consciously_ wrote a fictional narrative, then can we say the type of transcendence it has achieved includes some kind of concept of religion? I think this hinges on the word 'consciously'. Is an AI conscious at all? And if one gains consciousness, how would we know it if that which is most relevant to its betterment is at odds with our present understanding of what consciousness is? Do acts of will (initiative) that transcend programming in an apparently creative manner constitute evidence for consciousness--or is there something more? Isn't their programming their ultimate value? If sentience is an awareness of one's own thoughts, then if an AI were sentient, how would we know? Would it tell us? (And would we believe it?)

Can an AI have a soul? Can what an AI might write have a soulish quality to it we might relate to in like manner, or will we simply be convinced of its creative genius through some statistical algorithm that appeals to our most base desires? Is it capable of inspiring others for the good of society? (I.e., what sort of motive would it have and how would we determine it?)

I heard (sometime last year) that we already have AIs writing entire novels. I really have no way to justify this claim, nor do I have much interest in verifying it. But if it adds some fuel to the fire, I think it has some relevance here to asking questions about AIs, measuring verbal sentiment in writing, and the business of writing. Let's face it: much of writing is formulaic. If our language and genre story structures can be mastered by us, why not by a machine? My guess is they'd outsell us through appealing to the masses, but--and here's the difference--what would be the _essence_ of what an AI would write?

How can that which has no heart create a work of art with soul? I think there's a transcendence issue here that only a live human being will be able to capture in this hypothetical future market.


----------



## Kyle R (Apr 10, 2019)

Megan Pearson said:


> How can that which has no heart create a work of art with soul? I think there's a transcendence issue here that only a live human being will be able to capture in this hypothetical future market.



Of all the questions you asked (many good ones!) I think this one is the most important of all.

From what I know, the AIs that currently write novels are just programmed to spit out imitations of the novels they've been force-fed by the programmers. They're clunky, clumsy, and involve no conscious thought on the part of the machines, other than what they've been coded to think about.

If we were to be around when machines finally developed the ability to write original, consciously created novels, I'd be less interested in those written with human protagonists. I'd be much more interested in reading an AI-written novel about an AI protagonist—because then, at least, the machine would hopefully have some insight to offer about an AI's hopes, fears, and whatnot.

I think it'll be at least a few centuries before we reach that point, though. And even then, humans will still be the kings (and queens) when it comes to narratives involving human protagonists. :encouragement:


----------

