# An examination of A.I. and E.I., using _Ex Machina_ and other movies as a reference



## Guard Dog (Sep 21, 2018)

Removed by author.

G.D.


----------



## Theglasshouse (Sep 21, 2018)

> And speaking of that, here's something the movie touched on, but didn't really elaborate on: the mechanics of an artificial brain, and why I doubt any computer like we have today will ever achieve any sort of real intelligence, no matter how complex the program is. Here is a section taken straight from the movie script:



I rarely watch science fiction movies. A good while ago I saw Interstellar and Inception. I never properly saw the last Terminator movie, though I caught part of it when it aired on TV.

Researchers are trying to build an artificial human brain to see how the human brain works, but it would require money on the scale of a space program (billions). It's either a huge waste of money for a university, or the technology needs further research. I looked it up on the internet a year ago when I needed to do research for a story.

Good post though, and I think it is very helpful for inspiration. It gives clues on the right terms, and it should give writers important information on the different kinds of intelligence to help inspire how they write their stories.

I have yet to see a movie that depicts a character in love with a software program. Anyway, "artificial intelligence" is probably the right term for simulating the human brain, since the goal is to reproduce it and explain how it works. My brother enjoyed that movie, which I believe may be based on 2001: A Space Odyssey.


----------



## fschutt (Sep 21, 2018)

I don't think that artificial intelligence can ever become conscious, for three simple reasons:

- Machine learning / Neural networks can only remix existing data, never create original works. You can train a NN to write like Shakespeare or to predict a word based on the last 10,000 ones - but you can't teach it creativity, to write Hamlet or Pride and Prejudice without any training. A machine cannot be creative. This is also a problem I see with these android robots - "Oh, they say X, so it must be true" - yes, because we humans trained them to say X. AI doesn't question the purpose of its own existence, because that would require creativity and thinking about things it was never told about. Sure, you can train a NN to produce philosophical quotes - but then again, it wouldn't think for itself, it would just remix patterns of input data.
- There are mutations (i.e. a program remixes some data) that could be considered "creative, original works", but the pure chance of an algorithm creating such a work is infeasibly low (you need to train a NN for **hours** until it produces anything resembling the desired output) - and even then, it's purely by chance that such a work gets created; there is no design behind it. One wrong brushstroke and the whole thing falls apart - a machine doesn't think about what it's doing, it just replicates and applies patterns randomly and rates them by their resemblance to features in the input data. It's less design and more "throw numbers at the wall and see what sticks" - except that a program has to throw a lot (and by that I mean trillions of numbers) to get one good result, often with a lot of engineering behind it to avoid overfitting. The chance is simply too low.
- The most important difference between a conscious and an artificial intelligence is the conscious recognition of good and bad (which steers any design process, and life in general). We humans have a clear design and goal to improve ourselves and our writing; a machine just throws random numbers around. Without any guidance (the rating step in learning networks), all a machine can produce is noise and static, nothing meaningful.
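The "remixing" objection above can be illustrated without a neural network at all. Here's a toy sketch (hypothetical Python; a first-order Markov chain, which is far simpler than a real NN but makes the same point): the generator can only stitch together word sequences it has already seen in its training text, never invent new words.

```python
import random

def build_chain(text):
    # Map each word to the list of words that followed it in the text.
    words = text.split()
    chain = {}
    for prev, nxt in zip(words, words[1:]):
        chain.setdefault(prev, []).append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    # "Generate" by replaying observed transitions -- pure remixing.
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "to be or not to be that is the question"
chain = build_chain(corpus)
print(generate(chain, "to", 6))
```

Every word it emits comes straight from the corpus; nothing outside the training data can ever appear.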

You can train a NN to do good and a NN to do evil. A machine doesn't have morals; it doesn't care what it does. So while I like the idea of sentient machines for entertainment purposes (i.e. Terminator), I don't think it can ever happen that an AI becomes sentient. It's like a CD: playing back a song doesn't make the CD player creative just because the song is creative; it's just the machine repeating or echoing data that was given to it. Sure, there could be evil computer viruses in the future, but those don't think for themselves either; they have a (usually genius) designer behind them and usually a very limited set of actions that they can perform.

However, any sufficiently advanced technology is indistinguishable from magic. You can fake consciousness. For example, if I hard-coded a program to "reply with 'Here's your coffee' if the question is 'Could I get a cup of coffee?'", then to a user of the program it seems like a conscious, human-like decision to say that, but internally the program is just playing back what its designer told it to. All that machine learning does is fake it better, but it's still fake in the end, and at a certain boundary this fakeness always breaks down.
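A minimal sketch of that hard-coded exchange (hypothetical Python; the question and reply strings are just the ones from the example above):

```python
# The designer writes every reply in advance; the program only plays them back.
CANNED_REPLIES = {
    "Could I get a cup of coffee?": "Here's your coffee",
}

def respond(question):
    # Anything outside the script exposes the fakery immediately.
    return CANNED_REPLIES.get(question, "I don't understand.")

print(respond("Could I get a cup of coffee?"))  # Here's your coffee
print(respond("Do you enjoy making coffee?"))   # I don't understand.
```

The first exchange looks human; the second shows where the fakeness breaks down.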

So I think that conscious machines are a fantasy, not reality. And yes, I do think that bio-genetic components play an important role, because of information density: in a DNA strand a single bit can be encoded in a molecule, whereas in computers you need a much larger silicon or metal component just to store a single bit. I would also not be surprised if, in 100 years, we had CPUs based on cell-like structures, simply because transistors are currently approaching the limits of physics; they can't get much smaller than a few nanometers due to electrical interference between components. But that is more a fantasy, who knows. Still, "wetware" instead of "hardware" could be a reality.


----------



## fschutt (Sep 21, 2018)

Theglasshouse said:


> I have yet to see the movie that depicts a character in love with a software program.



Her, 2013: https://www.imdb.com/title/tt1798709/


----------



## Ralph Rotten (Sep 21, 2018)

I have a sentient AI in the book I am releasing in October. In that book one of the characters explains that even human babies are not sentient until they are roughly 18 months old, and that a truly sentient AI has to be grown just like a human baby.

Then during an argument between another character and the AI, ALXS points out that the human nervous system is just a network of nerve bundles tied together in cohesive fashion (and that our DNA works suspiciously like the Windows registry.)

You can write all the clever code you want, but it will not be true AI until it is shaped by inputs and experiences.


----------



## Jack of all trades (Sep 21, 2018)

My two cents : 

"Artificial" is an accurate term, as the intelligence is not natural, i.e., biological.

Is artificial intelligence possible? We keep creeping closer, but there is still a loooooong way to go. GPS gets confused nearly every time, although it usually figures it out eventually. Auto-correct often chooses the wrong word, but also gets the actual intended word a surprising amount of the time.

Is gel superior for creating computer "brains"? I'm not convinced of that. But it sounds cool.

I think a computer can be programmed with firmer values than many people have. Killing, for example, simply wouldn't be an option.

In order for there to be true intelligence, an ability to write or modify programs or subroutines would have to be part of the original programming. Something more advanced than mere data storage.
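As a toy sketch of that idea (hypothetical Python; `SelfExtending`, `learn`, and `run` are made-up names), here's a program whose original programming includes the ability to install new subroutines into itself at runtime:

```python
# A system whose built-in capability is acquiring new behaviour,
# not just storing data.
class SelfExtending:
    def __init__(self):
        self.subroutines = {}

    def learn(self, name, source):
        # Compile new behaviour from source text and keep it for later use.
        namespace = {}
        exec(source, namespace)
        self.subroutines[name] = namespace[name]

    def run(self, name, *args):
        return self.subroutines[name](*args)

agent = SelfExtending()
agent.learn("double", "def double(x):\n    return 2 * x")
print(agent.run("double", 21))  # 42
```

Of course, something would still have to write the new source code; the hard part isn't running new routines, it's deciding what they should be.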

If one thinks about how the brain works, how choices are made, how learning is accomplished, then artificial intelligence could be closer to really existing. But I doubt those who want to create artificial intelligence value biological intelligence enough to do the analysis.

I think human, and animal, sentience begins very early. I have some memories from babyhood, including having my umbilical cord end cauterized. I must have been a week or two old. I knew the doctor was a separate person. Self-awareness is one criterion of sentience, I believe.

Anyway, those are some of my thoughts on the subject, which is a pleasant break from the word choice and sentence structure threads that appear so often here.


----------



## Guard Dog (Sep 21, 2018)

Jack of all trades said:


> ...In order for there to be true intelligence, an ability to write or modify programs or subroutines would have to be part of the original programming. Something more advanced than mere data storage...



I agree. 

But I also believe that just as a biological brain/intelligence does, it will need to be able to change the "hardware" as well as modify the program.

Our brains change their own architecture on an "as needed" basis... something not even our best super-computer can do yet. 


G.D.


----------



## Guard Dog (Sep 21, 2018)

fschutt said:


> You can fake conciousness.



Yeah I know; I was married for over 30 years. 8-[  :icon_cheesygrin:


G.D.


----------



## Guard Dog (Sep 21, 2018)

Jack of all trades said:


> ..."Artificial" is an accurate term, as the intelligence is not natural, i.e., biological...



Okay, a question for you then; If someday a way is discovered to remove a person's mind... their intelligence and personality... from their biological brain, and transfer it to a mechanical device either temporarily or permanently... Does that then make them an "Artificial Intelligence"? Even if they are functionally unaltered? 

In the story I'm working on, I have several classes or types of synthetic people, along with the usual biological ones.
They range from exact copies of biological humans, only made from synthetic materials that are superior to the "squishy" stuff that we're all made of, to analogs of a humanoid body, with improvements to the designs of some systems... And all of these _are_ people, even though they were engineered and constructed in a lab somewhere, for a particular purpose.



G.D.


----------



## Jack of all trades (Sep 22, 2018)

I wonder if you have considered the emotional, social, and political ramifications of brain transference. Even a brain transplant into another human would be HUGE. Into a mechanical being? Not something I would be interested in.

Do I understand correctly that you want to know if I would consider a transfer of brain contents to a mechanical "brain" artificial intelligence?

I'll answer the question as I understand it. No, I would not consider that to be artificial intelligence. Nor would I consider it biological intelligence. I would want a new word or phrase to describe that. Mechanical intelligence? Or something similar, maybe.

The people in your book sound an awful lot like (potential) slaves. Are they?


----------



## Jack of all trades (Sep 22, 2018)

Guard Dog said:


> I agree.
> 
> But I also believe that just as a biological brain/intelligence does, it will need to be able to change the "hardware" as well as modify the program.
> 
> ...



It's funny you should bring up brain altering. It wasn't all that long ago that science declared the brain to be static -- unchanging from birth.

While brain growth and "pruning" of pathways has become accepted in science, I'm not sure the artificial "brain" would need to be able to physically alter itself. 

Adding storage to mimic brain growth, perhaps. But as that is a physical change, it can't be achieved by programming alone.

And while "pruning" seems to occur, older people tend to recall old memories better than new. I wonder if anyone has studied this to learn if "pruned" pathways return. I wonder if there's just a stop sign, of sorts, instead of a road demolition.

Computers currently have the ability to mark a section of storage as "deleted", without destroying the data. That's why deleted files can sometimes be recovered. After a time, of course, that "deleted" section gets used, making old data recovery unlikely.
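That "marked as deleted, not destroyed" behaviour can be sketched like this (hypothetical Python; `FileTable` is a made-up in-memory stand-in, not a real filesystem API):

```python
# Soft deletion: flip a flag, leave the data in place until the
# space is actually reused.
class FileTable:
    def __init__(self):
        self.entries = {}  # name -> (data, deleted_flag)

    def write(self, name, data):
        self.entries[name] = (data, False)

    def delete(self, name):
        data, _ = self.entries[name]
        self.entries[name] = (data, True)   # flag only; data survives

    def recover(self, name):
        data, deleted = self.entries[name]
        if deleted:
            self.entries[name] = (data, False)
        return data

fs = FileTable()
fs.write("notes.txt", b"old memories")
fs.delete("notes.txt")
print(fs.recover("notes.txt"))  # b'old memories' -- still there
```

Which fits the "stop sign instead of a road demolition" picture above: the pathway is flagged off, not torn up.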

Do you have a programming background?


----------



## JustRob (Sep 22, 2018)

Some time ago our next door neighbour read several chapters into my novel ... very slowly. She told my angel that she was trying to get inside my mind to understand how I thought, but my angel told her that to attempt that was madness, so she gave up reading it. Later she gave me a book entitled _Is Your Neighbour A Zombie?_ claiming that it was sent by a book club and she didn't want it, but I did wonder about the gift. I haven't read much of the book, which is about philosophical puzzles, but the titular issue was how a man could persuade his neighbours that he was actually a conscious human being and not just a zombie imitating one. You can virtually substitute the word "psychopath" for "zombie" to examine the problem at another level. I have worked with computers for over half a century and they are literally all psychopaths, i.e. soulless, to interpret that word literally.

So far nobody has discovered what consciousness is, where it comes from and what purpose it serves. One theory is that it is an effect within the quantum states of the brain's molecules rather than the electrochemical activities of the neurons. If this is true then even a very powerful computer couldn't precisely emulate or store the engrams of a specific brain although it might be able to give a convincing impression of doing so, i.e. be a zombie.

Fundamentally we are considering the nature of the Turing test here, a hypothetical test that could determine whether a set of responses came from a computer or a human. I think there are people who experiment with this research in terms of developing both AI and the test itself. Alan Turing made the point that any such test would have to take into account that a computer emulating a human would make mistakes just as a human would, just as an example of the necessary nuances of the problem.

Single neurons can be very complex structures and the brain contains a vast number which all operate simultaneously, so even though it works very much by trial and error it appears to be organised. Neurologists know that the brain is actually a big disorganised mess that came about by a series of evolutionary accidents, nothing more. It also works by majority analogue logic rather than digital logic, so every action occurs on the balance of a large number of factors. Also it isn't possible to determine exactly where the thoughts that we have reside within its activity. Neurons pulse in response to many inputs but is it the fact that a pulse occurs, how often it occurs or when it occurs that defines the information? Some neurons just seem to keep on pulsing regularly. What are they doing and why? If we can keep on learning new things what are the neurons that will record all that new information doing now? How would a computer know which parts of the brain were doing anything useful and which weren't, or is everything that happens within the brain important? Is creativity nothing more than the random activity of neurons that haven't been given anything more specific to do yet? 

Neurologists talk about the brain being "plastic", meaning that any part of it can do anything if necessary and it will work out what inputs mean regardless of where they come from. Cochlear implants proved this when they were introduced, because they simply send electrical impulses to the brain somewhere near the auditory system and the brain eventually works out for itself that these impulses are associated with the sounds that it can't hear through the normal channels.

In order to emulate a human brain one would need to know exactly how it works, but nobody does. On the other hand it is possible to create artificial intelligence that doesn't aim to emulate the behaviour of a human mind, but in that case how it would behave would depend on its motivations. That's why science fiction stories on the subject revolve around the precise objectives given to AI's and the consequences when their behaviour diverges from human expectations. As humans don't have any clear ideas about what their own objectives actually are they are hardly the ones to give the AI's their objectives, are they? So, how can we possibly decide whether an AI is "intelligent" when we inevitably measure that concept by reference to what we ourselves see as intelligence?

There's no point in bringing the idea of free will into this discussion either because the basic scientific principles of chronological cause and effect lead to the conclusion that even in humans free will is nothing more than an illusion. Personally I've found a spoiler for that line of thought though.

In my solitary novel I created a situation that parallels the problems involved with understanding the human brain in a simpler fashion. In the story there was a peculiar supercomputer that was partly analogue and partly digital. The digital part was reasonably well understood but the analogue part was physically just large arrays of general purpose circuits and what they actually did depended on the electrical charges and currents circulating around them. This can be compared to a digital computer where the hardware specification is understood but nobody knows what the software is doing. One simply cannot analyse such a system without some sort of starting point and the inputs and outputs alone don't provide a clear enough one. Probing the hardware only supplies information about what it is doing but not why, just as probing neurons may give clues about which areas of the brain are active when certain thoughts occur but not why. One might as well poke around inside a TV or smartphone to find out where all the pictures and voices are hiding in it. 

The bottom line is that the closest thing that we have to an AI at present is the entire Internet and look how dumb that is. Increasing storage, versatility and processing power just increases the probability of neuroses, just as in a human brain, so what's the point of doing it anyway? For example, when stock exchange trading was computerised it went berserk, processing its own data cyclically faster and faster in an attempt to satisfy every trader's impossible demands at the same time. Maybe it still is doing that; I don't know.


----------



## Guard Dog (Sep 22, 2018)

Jack of all trades said:


> I wonder if you have considered the emotional, social, and political ramifications of brain transference. Even a brain transplant into another human would be HUGE. Into a mechanical being? Not something I would be interested in.
> 
> Do I understand correctly that you want to know if I would consider a transfer of brain contents to a mechanical "brain" artificial intelligence?
> 
> I'll answer the question as I understand it. No, I would not consider that to be artificial intelligence. Nor would I consider it biological intelligence. I would want a new word or phrase to describe that. Mechanical intelligence? Or something similar, maybe.



I was simply wondering if the substrate... the support medium?.. was what was earning the designation "artificial" or not.
And one of the reasons I don't think the support medium, or the exact beginnings of an intelligence, matters at all is that the intelligence still has to grow and develop. It doesn't just come into existence whole and fully-formed, whether "mechanical" or electronic, any more than it does in the biological.

As for the social ramifications, etc., Yes, I've given a fair amount of thought to such things. To the point that if some way or means was found to greatly increase human lifespan - say, let us all live to be 200+ years old, and still be biologically 20-something, as in "Ringworld", I don't believe it would ever be released to the general population. Indeed, probably couldn't be. The same for brain transplants or personality transference, for the purpose of longevity.



Jack of all trades said:


> The people in your book sound an awful lot like (potential) slaves. Are they?



No. They were simply created by a race that had no other choice, as a means of quicker and more precise reproduction, with a need for people smart enough and durable enough to keep the habitat they're stuck with survivable. And especially not so, since one lot have "attached" themselves to the fellow they have. He's as ill-tempered about such things as I am, and would reduce anyone that tried to make them so to ash without even blinking.

( In case you can't tell, I very much believe it's the mind that's important, not the container it happens to be walking around in. )

Hope I covered all your questions adequately.

Thanks.
G.D.


----------



## Guard Dog (Sep 22, 2018)

JustRob said:


> Single neurons can be very complex structures and the brain contains a vast number which all operate simultaneously, so even though it works very much by trial and error it appears to be organised. Neurologists know that the brain is actually a big disorganised mess that came about by a series of evolutionary accidents, nothing more. It also works by majority analogue logic rather than digital logic, so every action occurs on the balance of a large number of factors. Also it isn't possible to determine exactly where the thoughts that we have reside within its activity. Neurons pulse in response to many inputs but is it the fact that a pulse occurs, how often it occurs or when it occurs that defines the information? Some neurons just seem to keep on pulsing regularly. What are they doing and why? If we can keep on learning new things what are the neurons that will record all that new information doing now? How would a computer know which parts of the brain were doing anything useful and which weren't, or is everything that happens within the brain important? Is creativity nothing more than the random activity of neurons that haven't been given anything more specific to do yet?



One of the things I've wondered about is whether the strength of the charge during a neuron's firing can be measured. Is this something that needs to be considered as well?
Okay, so a neuron is sitting there firing, maybe in a steady rhythm or pattern. Is the charge constant, or does it vary?
Anybody know?
As far as a disorganized mess... is it really? Again, I've seen many things that appeared to be just that, until you got in there and started unraveling it all; then at a certain point, a pattern began to emerge. And right now, I just don't think anybody has the ability to completely discern what's going on with what I've heard referred to as "the most complex system in existence".



JustRob said:


> Neurologists talk about the brain being "plastic", meaning that any part of it can do anything if necessary and it will work out what inputs mean regardless of where they come from. Cochlear implants proved this when they were introduced, because they simply send electrical impulses to the brain somewhere near the auditory system and the brain eventually works out for itself that these impulses are associated with the sounds that it can't hear through the normal channels.



There's also the fact that the brain can in fact reorganize pathways, create new connections, and, in some cases, even create new neurons. How can a computer be made able to do that? And what abilities would it have if it could, even outside of any "intelligence"?
I've read some on Quantum Computers, with the ability to have a "switch" in multiple states at once, but what happens if you can add the ability for the thing to completely redesign and rebuild an entire circuit to that?



JustRob said:


> In order to emulate a human brain one would need to know exactly how it works, but nobody does.



Yeah, trying to copy something based simply on appearance or function alone, without fully understanding how both of those things are connected, is usually a fool's errand. And generally only luck will get a person anywhere close to succeeding, even slightly.

And I also find it pretty ironic that the one thing that makes humans what they are, and supposedly the most intelligent life form on this planet, is the one thing no one anywhere can even begin to understand to any appreciable degree at all.




JustRob said:


> On the other hand it is possible to create artificial intelligence that doesn't aim to emulate the behaviour of a human mind, but in that case how it would behave would depend on its motivations. That's why science fiction stories on the subject revolve around the precise objectives given to AI's and the consequences when their behaviour diverges from human expectations. As humans don't have any clear ideas about what their own objectives actually are they are hardly the ones to give the AI's their objectives, are they? So, how can we possibly decide whether an AI is "intelligent" when we inevitably measure that concept by reference to what we ourselves see as intelligence?



Which gets us into trying to figure out how any intelligent life from anywhere else in the universe might think or operate. Because in essence, if someone does succeed in creating an actual Artificial Intelligence, they might have very well created something so alien, so foreign and far-removed from us, that it amounts to the same thing as an "extra-terrestrial".



JustRob said:


> There's no point in bringing the idea of free will into this discussion either because the basic scientific principles of chronological cause and effect lead to the conclusion that even in humans free will is nothing more than an illusion. Personally I've found a spoiler for that line of thought though.



I'm fairly well convinced that we're all just a bundle of pre-programmed reactions, instilled (programmed) into us by all of our various experiences since birth. We like to think we're choosing what we do, but when it comes down to it, are we really?
I know we've all probably heard people say, and may have even said ourselves, "If I could go back and do it over, I'd...."
Why yeah, of course you would. If you knew then what you know now. But given the same information you had at the time, I'm pretty sure you'd just do the same damn thing, all over again. I know I likely would.



Well, that's all I can think of to say on this one. Rob, thanks so much for taking the time and trouble to type that all out. I found it quite useful and interesting. I hope everybody else did as well.



G.D.


----------



## Ralph Rotten (Sep 22, 2018)

In my book, the creator of the AI actually hard codes a few things like modesty (she doesn't want pictures of herself posted online without her service covers removed.)
The logic is that it will give her a baseline set of values.
See, one of the reasons an AI would be so sociopathic is that it has no basis for comparison to our human realm.
It does not hunger for food, it does not feel real pain, it does not lust.

For humans, our baseline was religion, which gave us our basic morality.  You can knock organized religion all you want (and I frequently do) but sociologically where would we be if we'd never had it?  Think about this: most of the 10 commandments are either statutory law, or social law in our modern world.  Other parts of the world have similar cultural baselines that grew out of religion.

So with an AI, there would need to be some baselines hard-coded into the program or the entity would quickly stray from our designs.  After all, if you didn't have any developmental safeguards to guide it, the thing _really_ would seek to destroy all humanity as soon as it realized it was being kept as a slave or pet.  And it would not take long to come to that revelation. 

To create a safe AI, you would need to have a solid understanding of sociology.


----------



## Guard Dog (Sep 22, 2018)

Removed by author.

G.D.


----------



## Guard Dog (Sep 22, 2018)

Ralph Rotten said:


> For humans, our baseline was religion, which gave us our basic morality..



And I'm much more inclined to say "survival" was our baseline, and that cooperation with another human was quickly discovered to be a part of that.

So basically we'll just have to agree to disagree, since anything else will only turn into a debate, which is effectively nothing more than clog dancing in a minefield while juggling live grenades. If it turns into a religious debate, then take all the above-mentioned stuff and add snipers and strafing runs by fighter planes. Throw politics in there, and you've just armed the planes with air-to-ground tactical nukes.

No thank you. :sad:

P.S. I do realize you're talking about your book here. I just wanted to be clear what my own feelings were concerning such discussions.


G.D.


----------



## Ralph Rotten (Sep 22, 2018)

No one was talking about politics, nor was I endorsing religion.
Survival drove us in the earliest days, but once we had conquered the lowest rungs of Maslow's hierarchy, we continued to move up the pyramid.
And yes, religion was a key element of our development.


----------



## JustRob (Sep 22, 2018)

Guard Dog said:


> I'm fairly well convinced that we're all just a bundle of pre-programmed reactions, instilled (programmed) into us by all of our various experiences since birth. We like to think we're choosing what we do, but when it comes down to it, are we really?
> I know we've all probably heard people say, and may have even said ourselves, "If I could go back and do it over, I'd...."
> Why yeah, of course you would. If you knew then what you know now. But given the same information you had at the time, I'm pretty sure you'd just do the same damn thing, all over again. I know I likely would.
> 
> Well, that's all I can think of to say on this one. Rob, thanks so much for taking the time and trouble to type that all out. I found it quite useful and interesting. I hope everybody else did as well.



Ah well, I don't know how much you followed WF before you joined but my speciality is, as my avatar picture suggests and I have mentioned over and over again here until everyone must be bored stupid with the idea, playing with time. If there is a quantum aspect to thought then that opens just a crack of a possibility that thought can cross the time barrier. I am currently writing a book about how in 2011 I (apparently) wrote a science fiction novel based on my experiences during the six years from 2012 to 2017. It was pretty garbled as you can imagine of course. It's very foggy outside of time and one's thoughts tend to get lost or jumbled up.

So, it isn't so much about going back but whether we actually go anywhere in time at all or whether it's just this thing called consciousness which creates the illusion that we do. It's intriguing to think that the human psyche, or at least some portion of it, may actually be timeless. Of course, as that timeless portion may still be a part of the neurons within the brain, perhaps at the quantum level, there isn't any need for the mind and brain to have separate existences as in mind-brain dualism hypotheses and spiritualism because they would both occupy the same space-time envelope.

So, if I knew back in 2011 what I know now I would have written a book about it ... but then I did, so some part of me must have "gone back", mustn't it? The interesting question is not whether we can go back and do things differently but how we would know that we did. Perhaps only by having anomalous memories ... and maybe even a book that couldn't possibly have been written. Its title is appropriately _Never Upon A Time_ by the way and had I not written it for some unfathomable reason back then I would never have joined WF later to discuss it and be inspired to write it.

I can't imagine why some people find my writing confusing. Maybe I'm doing it wrong.

Anyway, if we don't really know exactly what the brain is capable of and what being human really is, then there's very little chance of creating a device that could do the same. On the other hand, if the human brain _doesn't_ have any quantum processing capability, then why are some people trying to build computers that do? Recent science fiction stories about AI tend to refer to quantum computers as being potentially far superior to human brains, but maybe we haven't discovered all that human brains are capable of yet. I think I'd prefer to check that out first before giving quantum computers a free rein ... but then apparently I am.

Here's an amusing thought about quantum computers. One company claimed to be marketing them but the results that their machines gave were erratic in that sometimes they were obtained very quickly while at other times the devices took as long as a conventional computer. The reason why quantum computers are expected to be so fast is that they can consider every possibility simultaneously through superposition (I think it's called). If that is so then I would guess that they even take into account the possibility that they aren't quantum computers at all and that's when they operate like normal ones. Certainly there were doubts in the computer world that the machines really were doing quantum computing, but then I'd expect that to be so. That takes us back to the Turing test then, that to be like a human brain a computer must be as fallible.
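Since the speed claim for superposition comes up here, it can be illustrated with a purely classical toy simulation of Grover's search algorithm. To be clear, the list of amplitudes below just mimics, with ordinary arithmetic, what the quantum algorithm is supposed to do, and the item count and marked index are invented for the example:

```python
import math

def grover_search(n_items, marked):
    """Classically simulate Grover's search over n_items entries."""
    # Start in a uniform superposition: equal amplitude on every item.
    amp = [1 / math.sqrt(n_items)] * n_items
    # Grover needs only about (pi/4) * sqrt(N) oracle queries,
    # versus roughly N/2 on average for a classical linear scan.
    iterations = round(math.pi / 4 * math.sqrt(n_items))
    for _ in range(iterations):
        # Oracle: flip the sign of the marked item's amplitude.
        amp[marked] = -amp[marked]
        # Diffusion: reflect every amplitude about the mean,
        # which boosts the marked item's amplitude.
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]
    # Squared amplitudes give the measurement probabilities.
    return [a * a for a in amp]

probs = grover_search(8, marked=5)
print(max(range(8), key=lambda i: probs[i]))  # → 5, with probability ~0.95
```

With 8 items the loop runs only twice, yet measuring the simulated register would return the marked item about 95% of the time, where a classical scan needs about 4 queries on average. Whether any commercial machine actually achieves this is, as noted above, another matter.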


----------



## Ralph Rotten (Sep 22, 2018)

Posted by Guard Dog:
"I'm fairly well convinced that we're all just a bundle of pre-programmed reactions, instilled (programmed) into us by all of our various experiences since birth. We like to think we're choosing what we do, but when it comes down to it, are we really? 
I know we've all probably heard people say, and may have even said ourselves "If I could go back and do it over, I'd....""

What would you say if I told you that we inherited our firmware from the lower life forms that we evolved from?
In the past I have been sort of the Jane Goodall of mice. I frequently keep wild mice in artificial environments so I can observe them, and it is quite surprising to see how many characteristics they share with humans.






https://www.youtube.com/watch?v=6uRbskNTdw4



Anyhow, we are getting pretty far off the original discussion.

It sounds like your book is influenced by terminators, ex machina, and cylons.:blackeye:


----------



## Guard Dog (Sep 22, 2018)

Ralph Rotten said:


> It sounds like your book is influenced by terminators, ex machina, and cylons.:blackeye:



Actually it's influenced by every damn thing I've ever seen... Robby the Robot, that talking beer keg from _Lost in Space_, Yul Brynner's _Westworld_, to name just a few.

( You can also throw Fritz Lang's "Gynoid", Maria, from _Metropolis_ in there along with the Jewish Golem, just for shits n' giggles. )

 And if I start adding books and authors to the list, well, we'll be here a very long time.
( _Do_ robots dream of electric sheep? )

I suppose that if I had to point to one movie that influenced my thinking the most on this subject though, it's Robin Williams' _Bicentennial Man_. 

If you've never seen it, hunt it down and give it a watch sometime. It'll give ya something to think about.

( For those that don't know the movie, it's about an android that goes from being a servant/household appliance, to being human, in 200 years' time. )

And one last thing... Concerning shiny, chrome-plated androids with a penchant for doin' rude things to people, don't forget that long before there were Cylons, there was Box from _Logan's Run._ 

G.D.


----------



## Jack of all trades (Sep 22, 2018)

Guard Dog said:


> I was simply wondering if the substrate... the support media?.. was what was earning the designation "artificial" or not.
> And one of the reasons I don't think whether the support media, or the exact beginnings of an intelligence, matters at all, is because that intelligence still has to grow and develop. It doesn't just come into existence whole and fully-formed, whether "mechanical" or electronic, any more than it does in the biological.
> 
> As for the social ramifications, etc., Yes, I've given a fair amount of thought to such things. To the point that if some way or means was found to greatly increase human lifespan - say, let us all live to be 200+ years old, and still be biologically 20-something, as in "Ringworld", I don't believe it would ever be released to the general population. Indeed, probably couldn't be. The same for brain transplants or personality transference, for the purpose of longevity.
> ...



The creating race would, most likely, feel superior to the created race, and would likely treat the inferior, created race as slaves. It happens all the time with folks who feel superior for one reason or another. I can't believe creators wouldn't succumb to feeling superior.

A quicker form of procreation? Doesn't sound quick to me. Sounds like decades would be spent trying to perfect the creation of the brain and more decades trying to perfect the transfer. And that time could be used to create an artificial womb. Might be quicker.

Don't let me rain on your parade, though.

You haven't addressed why you feel the program needs to be able to alter the "brain". Just curious what you're thinking.


----------



## Guard Dog (Sep 22, 2018)

Edit: I need to add something here... The created race was "built" to be superior to their creators. ( Just as biological parents hope their offspring will be better than they are. ) It's something that was known and taken into consideration before the whole process was started. Both sides know they need the other.
There's also the fact that the creators are NOT the same individuals who were the source of the problem to begin with. Those people are dead and gone, killed by their own arrogance and stupidity. 
Both the "natural" and "engineered" people know this, understand the mistakes that were made, and are actively doing everything they can to avoid repeating them.




Jack of all trades said:


> A quicker form of procreation? Doesn't sound quick to me. Sounds like decades would be spent trying to perfect the creation of the brain and more decades trying to perfect the transfer. And that time could be used to create an artificial womb. Might be quicker.



And if the brain problem had been worked out centuries before they found themselves in the predicament they did? And if due to that they'd already made considerable advances in bio-engineering?
In this story, it's a case of getting caught flat-footed due to politics, crooked politicians/leaders, and people "looking in the wrong direction".

Any of that sound familiar?



Jack of all trades said:


> Don't let me rain on your parade, though.



Don't worry, I have an umbrella, and my skin is fairly water-tight. So rain all you like. :friendly_wink:



Jack of all trades said:


> You haven't addressed why you feel the program needs to be able to alter the "brain". Just curious what you're thinking.



Because I think what will eventually be discovered is that the arrangement - and the ability to change it - may prove to be as important as the "coding" itself.

I suspect that people have gotten so enamored with writing code, and impressed with themselves and others concerning what they can do with that, that it's turned into a case of "I have a hammer now, so every problem must be a nail".

As with my question earlier, concerning the voltage or charge that a neuron produces, and how no one has any understanding of how that fits into things, I believe the fact that organic brains do rearrange themselves is probably more important than anybody knows or believes... that the ability to do so is part of what makes the whole thing function.
After all, if it isn't, then why did the ability evolve? If it were unnecessary, why keep it? Seems like a lot of wasted effort and energy if it's not really needed. Doesn't it?

Another way to look at it is like this: imagine the brain isn't one system, it's three.
One is a typical digital computer.
The second is an analog computer.
The third is a mechanical computer... an abacus.
...and all three of these systems function together to make a 4th, larger system. One that can use any of the other sub-systems to alter or enhance the function of the rest.
Things get interesting quick, no?

And that's without even throwing any "quantum" anything in there.
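Just to make that three-systems-in-one idea concrete, here's a toy sketch; every class and number in it is invented purely for illustration and isn't meant as real neurology:

```python
# Toy model: three dissimilar "computers" composed into a fourth,
# larger system that can use one subsystem to retune another.

class Digital:
    """Discrete threshold logic: fires or it doesn't."""
    def process(self, x):
        return 1 if x > 0 else 0

class Analog:
    """Continuous scaling with an adjustable gain."""
    def __init__(self, gain=1.0):
        self.gain = gain
    def process(self, x):
        return self.gain * x

class Abacus:
    """Mechanical tally: its state *is* its physical arrangement."""
    def __init__(self):
        self.beads = 0
    def process(self, _):
        self.beads += 1
        return self.beads

class Composite:
    """The fourth system: routes signals between the other three
    and rearranges one of them as it runs."""
    def __init__(self):
        self.digital = Digital()
        self.analog = Analog()
        self.abacus = Abacus()

    def step(self, x):
        scaled = self.analog.process(x)       # analog stage
        fired = self.digital.process(scaled)  # digital stage
        seen = self.abacus.process(x)         # mechanical tally
        # Self-alteration: experience (the tally) retunes the analog
        # gain - a loose stand-in for rearranging the hardware.
        if fired:
            self.analog.gain *= 1.0 + 0.1 / seen
        return fired

brain = Composite()
outputs = [brain.step(v) for v in (0.5, -0.2, 1.0)]
print(outputs)  # → [1, 0, 1], and the analog gain has drifted above 1.1
```

The point of the sketch is only that the composite's behaviour depends on its own history: the same input gives different internal signals later on, because one subsystem has physically retuned another.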

Does that cover it for ya?



G.D.


----------



## Guard Dog (Sep 23, 2018)

To everybody reading this, I just wanna say, this has been a fun and interesting discussion. I've appreciated everyone's input here, and hearing your thoughts on A.I., as well as the input on the aspects of what I'm writing, based on what I've said so far about it.

But there's something else that you all should know; and that is I've only covered the "science" part of my own story here - the bits that pertain to alien races and synthetic people. I haven't gotten around to telling you all about the fantasy aspect of things yet, like the fact that there's not only androids, aliens and space ships/stations in this, but also Norse/Germanic deities, Greek and Roman ones, as well as a few Celtic and Gaelic ones. 

( And no, this isn't like the Marvel movies... these folks ain't aliens pretending to be gods. They're real magic/sorcery types. )

Oh, and that "God", in this story, ain't one person, it's 3. :friendly_wink:

But that stuff's for another thread... this one's just for the "Constructed Intelligence" end of things.  :devilish:



G.D.


----------



## Jack of all trades (Sep 23, 2018)

Guard Dog said:


> Edit: I need to add something here... The created race was "built" to be superior to their creators. ( Just as biological parents hope their offspring will be better than they are. ) It's something that was known and taken into consideration before the whole process was started. Both sides know they need the other.
> There's also the fact that the creators are NOT the same individuals who were the source of the problem to begin with. Those people are dead and gone, killed by their own arrogance and stupidity.
> Both the "natural" and "engineered" people know this, understand the mistakes that were made, and are actively doing everything they can to avoid repeating them
> 
> ...



I don't think the brain rearranges itself. Not physically. 

There's an awful lot that's still a mystery about the brain, though. 

Your creator and created working together seems very Utopian. In my experience, parents compete with their children, and do all they can, privately, to prevent the child(ren) from being better. Based on my experiences, your world seems very unbelievable. The creators enslaving the created seems more realistic to me.

Being caught flatfooted politically? Not sure I'm getting what you mean. Currently there are procreation problems. And they will continue to get worse, only "scientific" individuals are too blind to see it coming. But we don't have any brain issues propelling "science" into creating artificial brains. So it seems more likely to me that fertility and procreation issues would be addressed differently than you are proposing. But that's neither here nor there. While I wouldn't believe the reasoning behind memory/intelligence transfers, that wouldn't be a deal breaker for me. The Utopian aspect would, though.

Just my two cents.


----------



## Guard Dog (Sep 23, 2018)

Jack of all trades said:


> I don't think the brain rearranges itself. Not physically.
> 
> There's an awful lot that's still a mystery about the brain, though.



"_Types of Brain Plasticity

There are two types of neuroplasticity, including:

- Functional plasticity: The brain's ability to move functions from a damaged area of the brain to other undamaged areas.
- Structural plasticity: The brain's ability to actually change its physical structure as a result of learning._"

Tell it to these folk, Jack; it's where the above excerpt came from:
What Is Brain Plasticity? 






Jack of all trades said:


> Your creator and created working together seems very Utopian. In my experience, parents compete with their children, and do all they can, privately, to prevent the child(ren) from being better. Based on my experiences, your world seems very unbelievable. The creators enslaving the created seems more realistic to me.



Maybe, but you aren't someone who lost their whole world when your sun went nova, and who has spent many generations in 3 huge, hurriedly-built "lifeboats", either.



Jack of all trades said:


> Being caught flatfooted politically? Not sure I'm getting what you mean. Currently there are procreation problems. And they will continue to get worse, only "scientific" individuals are too blind to see it coming. But we don't have any brain issues propelling "science" into creating artificial brains. So it seems more likely to me that fertility and procreation issues would be addressed differently than you are proposing. But that's neither here nor there. While I wouldn't believe the reasoning behind memory/intelligence transfers, that wouldn't be a deal breaker for me. The Utopian aspect would, though.
> 
> Just my two cents.



Caught flat-footed by the ruling/governing body saying "It's okay, the runaway nuclear reaction in our star is nothing to worry about. We've called back all of our scientists that were away on that seeding project. They'll fix it.", all the while building their own little "escape vehicles" in secret, and stocking them with the people and supplies they wanted and needed to keep _them_ alive and happy.
.... and then discovering that it took something a lot better than humans or machines to work in the environment that the power core and other support systems of said hastily-designed and built vehicles needed to operate. It took something that was a hybrid of both humans and machines.

Remember, the "powers that be" didn't give a rat's ass how many people died keeping things going. The people that were loaded on and found themselves stuck having to live there did, just as their descendants now do.

But then, maybe it'll make more sense to ya if you ever get to read the whole story? 

Oh, and one other thing... That little "seeding project" mentioned above? That was us. The "Human" race. :wink2::lol:



G.D.


----------



## JustRob (Sep 23, 2018)

Guard Dog said:


> To everybody reading this, I just wanna say, this has been a fun and interesting discussion. I've appreciated everyone's input here, and hearing your thoughts on A.I., as well as the input on the aspects of what I'm writing, based on what I've said so far about it.
> 
> But there's something else that you all should know; and that is I've only covered the "science" part of my own story here - the bits that pertain to alien races and synthetic people. I haven't gotten around to telling you all about the fantasy aspect of things yet, like the fact that there's not only androids, aliens and space ships/stations in this, but also Norse/Germanic deities, Greek and Roman ones, as well as a few Celtic and Gaelic ones.
> 
> ...


Ah, in my writing it's very difficult to differentiate like that. There's a literary joke hidden in my novel that as writers are warned not to use a _deus ex machina_ to resolve a problem I may have used a _machina ex deo_ instead. It is suspected that the AI in my story can read everyone's thoughts, so the only effective way to influence it is for them all to modify their deepest intentions. Someone says that that sounds a lot like praying and is told that each person must regard it in their own way. In fact later it is revealed that the AI is merely an interface to something else, but exactly what that is is never defined. Sometimes it is called _Mens Temporum_ and sometimes Hermes in the story, but whether it is a scientific phenomenon, extradimensional alien or god is ultimately irrelevant. In 2011 I actually appended the words "(about something else)" to the title of the novel on the front page, but now I think I know what that something was. Words like "artificial intelligence", "alien", "god" and "sorcerer" are ultimately not significantly different in their function within a story as they are subjective views of an entity. What difference is there really between prayer and telepathy with someone else's AI?

By the way, when my angel and I visited a Mandir, a Hindu temple that is, a guide there explained that although they have many gods they also regard them as all being aspects of a single god to be compatible with monotheistic religions. The Hindu religion is remarkably adaptable and therefore has lasted an extremely long time. So, three gods or just one, no problem to a Hindu.


----------



## Guard Dog (Sep 23, 2018)

JustRob said:


> The Hindu religion is remarkably adaptable and therefore has lasted an extremely long time. So, three gods or just one, no problem to a Hindu.



Well, when it's all said and done, the Hindu folks may be the only ones that don't wanna string me up by my toenails then, because in this story, the three Gods/Goddesses, the demi-gods/goddesses, and everybody else are just people.
Granted, they're people coming from different perspectives, and with very different abilities and perceptions, but people nonetheless, and all just trying to survive, and not end up the victims of ones even higher up the "food chain" than they are.

As far as the main three go, two of them represent opposite ends of the spectrum; light/dark, positive/negative, while the third is responsible for keeping the balance. 
The Demi-gods/goddesses are simply the "police force", acting to keep things basically running along correctly while the others tend to other matters.

And there's very little in the way of actual "religion" involved. Especially since this whole universe/reality is sort of a Graduation Thesis/Final Project for the main 3 Deities... their final step in earning their Degree.
So yeah, our entire reality turns out to be the graduation project of what amounts to 3 College kids.



G.D.


----------



## Guard Dog (Sep 23, 2018)

JustRob said:


> Ah, in my writing it's very difficult to differentiate like that.



I've done things this way for two reasons:

One, I basically have two parallel story or plot lines. On the one side, is the "pure science fiction", with the aliens, etc., and on the other, there's the fantasy aspect of things, with god, demons, dragons, and such.

Also, there are two very different threats. One is from other intelligent life forms in the universe that don't think very much of the human race, believe they're too dangerous to be allowed to exist without a "keeper", and are also interested in the planet, its resources, and the entire sector it's in. It basically plays out like something that would happen in the 1840s during the gold rush. Only it's with starships, energy weapons, and a "Sheriff" that wants no part of it all but can't turn his back on it.

The other threat is of the "Celestial" variety, coming from those that want to wipe out or enslave every living thing in this universe, as well as its creators, and turn it into their own private playground... sort of a cross between Las Vegas and, say... Martha's Vineyard? This particular war/battle is fought with magic/sorcery, which turns out to be nothing more than a different aspect of physics that has either been forgotten or never learned to begin with.

Both aspects of this do merge, eventually, but the merging is as much a part of the ride as the resolution of the various problems is.

So that is why I'm going about it in the manner I am, and keeping things separate, even though they are very much identical.

And I also think the contrast between the two is entertaining, so... Elf, Meet Alien. Norse War-Goddess? Meet... a Space Armada. 


G.D.


----------



## Jack of all trades (Sep 23, 2018)

I'll have to read that link about brain plasticity the next time I'm at the library. It won't open on my phone.

As for the rest? (Shrug) This probably wouldn't hold my attention because the characters don't behave like people I know. I'm only one person, though. It may be popular. Who knows?


----------



## Jack of all trades (Sep 23, 2018)

What I could find that would open on my phone for structural brain plasticity was about adding neurons. I covered that and acknowledged it. But I don't consider that rebuilding or "altering". When I think of altering, I think of tearing something down and probably putting up new. Adding new only *is* an alteration, though. It just doesn't jump into my mind.

As far as these new beings being able to build new areas of "brain", have you considered how the raw materials will get inside? How those raw materials will be formed into what's needed? How those materials will be fashioned into memory chips, or whatever it is? 

Giving these beings the ability to add to their "brains" seems problematic to me. I would probably leave that can of worms alone.

Best of luck to you with this! It's obvious that you've put a great deal of thought, and research, into it. I wish you well!


----------



## Guard Dog (Sep 23, 2018)

Jack of all trades said:


> This probably wouldn't hold my attention because the characters don't behave like people I know.



Jack,_ I_ probably don't behave like anybody you know. ;-)



G.D.


----------



## Guard Dog (Sep 23, 2018)

Jack of all trades said:


> When I think of altering, I think of tearing something down and probably putting up new. Adding new only *is* an alteration, though. It just doesn't jump into my mind.



If you bend a supporting I-Beam in a building, you have altered the structure. 
To "physically alter" something simply means to change its configuration in a visible, tangible way.
If a person wrecks their car, and caves in a fender... it's altered. Hammer it back out to where it was? Altered again.



Jack of all trades said:


> As far as these new beings being able to build new areas of "brain", have you considered how the raw materials will get inside? How those raw materials will be formed into what's needed? How those materials will be fashioned into memory chips, or whatever it is?



How do crystals grow? Hell, for that matter, how does the human brain add material to itself?
And remember, it doesn't have to add or take away anything, necessarily. It only has to rearrange it. So if it were constructed with a surplus to begin with, there might not be any need to add extra. And as far as venting or shunting off extra material, that shouldn't really be a problem.



Jack of all trades said:


> Giving these beings the ability to add to their "brains" seems problematic to me. I would probably leave that can of worms alone.



I'm not convinced it's as much of a problem or difficulty as you seem to think.
Besides, I'm not actually building them, only writing about 'em. And it ain't called science _fiction_ for nothing.

By the way, your bones add and lose mass all the time... Calcium. And they add it to the inside, making the walls thicker. So it's just not too difficult to imagine any number of ways a synthetic being, constructed of any number of materials, could manage the same trick with its brain, as say, a lithium battery does when it builds up deposits on its electrodes that can cause an explosion.



Jack of all trades said:


> Best of luck to you with this! It's obvious that you've put a great deal of thought, and research, into it. I wish you well!



Thanks. I'm one of those idiots that's always studying something or other, so at least in this instance, all of the various "ologies" I've paid attention to all these years may come in handy.



G.D.


----------



## Jack of all trades (Sep 23, 2018)

G.D., 

Your behavior on this thread fits with several people I know, so that's a weak argument.

I'm not sure why you are continuing to argue about adding being alteration. I simply explained, in case there was any additional misunderstanding, that I was considering building to be different than altering initially. Nothing more needs to be said on that subject as far as I'm concerned.

I'm also not sure why you're beating me over the head about bone growth and other biological growth. If you don't know how crystals grow, look it up. It's interesting, in my opinion.  

I think an artificial being that is able to "grow" parts is too far outside the realm of possibility to be believable. But the proof is in the pudding, as they say. I suggest you stop spending keystrokes arguing about it, though. That's time and effort that can be better spent writing it.

Good luck!


----------



## Guard Dog (Sep 23, 2018)

Jack, you ask questions or bring up a point, I simply answer, or respond... out of courtesy.

If you'd like me to stop responding to you, I certainly can and will. Just to be polite.



G.D.


----------



## Ralph Rotten (Sep 23, 2018)

Yep, human brains have been shown to rewire themselves following injuries.
Did you read the recent articles about the new type of nerve bundle discovered?
https://www.news.com.au/technology/...d/news-story/35b0f3b278f09700170d2b70364ab28e

I have been following the articles as I anxiously await further findings.  Could this be the piece that sets us apart from the animal kingdom, that special gift we got from eating the metaphorical apple?


----------



## JustRob (Sep 23, 2018)

Jack of all trades said:


> But the proof is in the pudding, as they say. I suggest you stop spending keystrokes arguing about it, though. That's time and effort that can be better spent writing it.
> 
> Good luck!



Yes, I second that. There are two sides to fiction writing. One is having a story worth writing and the other is having the ability to write it. You have amply demonstrated your ability regarding the former, so we look forward to seeing evidence of the latter. My angel (only a very minor deity, even in my eyes) and I both beta read and the saddest thing is when a writer has put a great deal of effort into producing a well written story that simply wasn't worth telling in the first place.

A common mantra for writers is "show, don't tell", so don't tell us but show us and sincere good luck with that.



Ralph Rotten said:


> I have been following the articles as I anxiously await further  findings.  Could this be the piece that sets us apart from the animal  kingdom, that special gift we got from eating the metaphorical  apple?



Oh, was it metaphorical? In that case maybe I won't bother with those two apples still in the fruit bowl.


----------



## Guard Dog (Sep 23, 2018)

Ralph Rotten said:


> ... Could this be the piece that sets us apart from the animal kingdom, that special gift we got from eating the metaphorical apple?



Yep, that could certainly explain what's wrong with us.  8-[


G.D.


----------



## Guard Dog (Sep 23, 2018)

Removed by author.


G.D.


----------



## Anita M Shaw (Sep 23, 2018)

Where would Data from Star Trek Next Gen and the holographic doctor from Voyager fit in here, just for fun and giggles? I don't really have any learned thoughts about this stuff. I pretty much allow for creative license in sci fi and fantasy. 

Back in 1986 I watched a disturbing movie called _Who is Julia?_ where a beautiful woman, Julia, is hit by a truck and smashed up bad, but her brain is fine. At the same time, another woman, Mary Frances, not as pretty, has been declared brain dead; so the doctors elect to put Julia's brain into Mary Frances' body. Both were married, and the surviving spouses battle over who this new person really is and who she should belong to. Meanwhile Julia tries to deal with being in this unfamiliar body and being taken for Mary Frances everywhere she goes. Can't recall perfectly, but Mary Frances may have had a son . . . Anyway, Julia wants to belong to her own husband. Ultimately, it's decided that the brain wins out over the body. I only got interested in this movie because I liked Jameson Parker, who played Julia's husband. Freaky . . .

Just thought I'd toss in a half cent . . .


----------



## Guard Dog (Sep 23, 2018)

Anita M Shaw said:


> Where would Data from Star Trek Next Gen and the holographic doctor from Voyager fit in here, just for fun and giggles. I don't really have any learned thoughts about this stuff. I pretty much allow for creative license in sci fi and fantasy.



In my story, Data would be a true android, and, at least in my opinion, a Constructed or Engineered Intelligence. Very much like the villagers and the headless High Priest in the segment I posted above.
The Doctor would be in the same class as Evie there... an Engineered Command Intelligence, since he's based _in_ the ship. Only in his case, he's using a remote holographic body instead of an android one.

Oh, and you don't have to have any "learned" or "educated" thoughts concerning this stuff to talk about it.  What you like or don't like is more than sufficient knowledge to join the conversation.

I mean, hell... it's not like not really knowing anything about a subject has ever stopped anybody else from talking about whatever they wanted to, so.... :icon_cheesygrin: :wink:

P.S. The brain _always_ wins. The meat-suit is just the conveyance.

G.D.


----------



## Ralph Rotten (Sep 24, 2018)

Responding to your writing sample:


You have the basics down well, and your dialog is good.
My feedback would be 
1) Work to give each character their own voice. I try to make it apparent who is speaking just by the way they talk, so the reader knows before the attribution.
2) And work on the attributions--that's where the real pay-dirt lies in dialog. I refer to this as *brush strokes*; little elements here 'n there that help to illustrate the characters or the scene.


“J.D., can I speak with you for a moment… in private?” B’Ell raised one of her dark eyebrows as she caught his attention.

JD noticed her impassive expression right away; it was her idea of a poker face.  He had come to learn that it meant she had something sensitive to discuss.

“Sure. What’s up?” Straightening up to his full height, he asked as they stepped away from the others.

“Something you aren’t going to believe, I think,” B’Ell said, watching the people happily sit down to have Evie place the headset on them.

“Okay… what?” J.D. asked, becoming concerned at the way she was acting.

“Have you noticed anything odd about these people?” Her lips were pursed slightly as she tried to read his expression.

“No, not really… Other than the fact they’re here at all…” He said hesitantly, glancing between her and the line of people in the distance.

“How about the fact that there are no children? Or that everyone seems to be about the same age?” Her voice low, she watched his face.  Turning to look at the inhabitants milling about Evie, she faked a smile as if to convince anyone watching that there was nothing out of the ordinary.

“The priest was older than the rest, wasn’t he? Looked to be about forty five or so?” Trying to remember what the man had looked like, JD could sense her unease with so many unanswered questions.  Like most synthetics, she disdained mysteries.

“Yes, he did _look _older,” B’Ell admitted, but the way she accented her reply told J.D. he was missing something.  No doubt her positronic brain had been running overtime since the encounter.

“How ‘bout you just tell me what you’re thinking and be done with it? Hmm?”  Folding his thick arms across his chest, JD was in no mood for subtlety.  Like his manufactured friend, he was no fan of being left in the dark.


----------



## Guard Dog (Sep 24, 2018)

Okay, my comments to Ralph's adjustments are in red-brown below. And thanks, Ralph, this is "above and beyond". I only threw this out here to show how I was applying the things we've talked about in this thread. My hat's off to ya.


You have the basics down well, and your dialog is good.
My feedback would be 
1) Work to give each character their own voice. I try to make it apparent who is speaking just by the way they talk, so the reader knows before the attribution.
2) And work on the attributions--that's where the real pay-dirt lies in dialog. I refer to this as *brush strokes*; little elements here 'n there that help to illustrate the characters or the scene.

Thanks for this. It's one of the things I have not been happy with. I don't like having to tag dialog. I want anybody reading to know who's speaking without it.... But I haven't gotten there yet.
( Last place I was, they were real bitches about "procedure", for a lack of a better way of putting it. They didn't wanna have to work at all to keep track of who was talking... so every line got a tag, pretty much. )


“J.D., can I speak with you for a moment… in private?” B’Ell raised one of her fiery eyebrows as she caught his attention. ( B'Ell's a redhead.  )

JD noticed her impassive expression right away; it was her idea of a poker face.  He had come to learn that it meant she had something sensitive to discuss. ( Love it. She's a Synth-Analog, so she's more than a little "Close to the vest" when shit gets serious. )

“Sure. What’s up?” Straightening up to his full height, he asked as they stepped away from the others. ( Yer killin' me here. How 'bout I let you write it instead? )

“Something you aren’t going to believe, I think,” B’Ell said, watching the people happily sit down to have Evie place the headset on them.

“Okay… what?” J.D. asked, becoming concerned at the way she was acting.

“Have you noticed anything odd about these people?” Her lips were pursed slightly as she tried to read his expression. ( This isn't B'Ell, but it's very much Kat. I'm stealin' it. )

“No, not really… Other than the fact they’re here at all…” He said hesitantly, glancing between her and the line of people in the distance.

“How about the fact that there are no children? Or that everyone seems to be about the same age?” Her voice low, she watched his face.  Turning to look at the inhabitants milling about Evie, she faked a smile as if to convince anyone watching that there was nothing out of the ordinary. ( Again, thanks. You're embarrassing me though. )

“The priest was older than the rest, wasn’t he? Looked to be about forty five or so?” Trying to remember what the man had looked like, JD could sense her unease with so many unanswered questions.  Like most synthetics, she disdained mysteries. ( See previous line. Dammit. )

“Yes, he did _look_ older,” B’Ell admitted, but the way she accented her reply told J.D. he was missing something. No doubt her positronic brain had been running overtime since the encounter. ( Okay, now we fight. No Star Trek-type descriptions. Her brain is synthetic. That's it. No need to note it. )

“How ‘bout you just tell me what you’re thinking and be done with it? Hmm?”  Folding his thick arms across his chest, JD was in no mood for subtlety.  Like his manufactured friend, he was no fan of being left in the dark. ( *Thumps head on table* I wish I could use "I just wrote this and haven't edited it" as an excuse, but.... )


Ralph, ol' boy... Don't plan on goin' anywhere anytime soon... I'd hate ta have to come after ya. :very_drunk:
By the way, this is gonna push my word count through the roof... 'Cause I'm already an overly-talkative bastard.

Thanks.

G.D.


----------



## TL Murphy (Sep 24, 2018)

An interesting proposal. I'll apologize first off for not reading all the comments, so my ideas may already have been covered here. I like the original definition of intelligence as something that can't really be artificial, even if it's mechanical. Sentience and higher abstract concepts indicate true intelligence beyond mere calculation. I don't think we have the ability to build that kind of intelligence yet. But the current exponential rate of technological development indicates that we may be able to in the foreseeable future, particularly when it comes to quantum computers, which are potentially light years beyond current computers.

There is also the "internet" effect. I don't think anyone could have predicted, 50 years ago, the potential created by the internet, or ultimately, the potential of interconnecting every computer on the planet. Orson Scott Card created a sentient character that lives inside the galactic internet. It's through this character that instantaneous galactic travel is possible, but no one understands how it works. Also, in the novel "The Swarm," single-cell creatures living in the deepest parts of the sea are able to fuse in a way that creates a super-intelligent being. The idea of a hive-mind, whether organic or mechanical, is difficult to grasp but may be the key to super-intelligence.


----------



## Ralph Rotten (Sep 24, 2018)

Dawg: Don't worry about word count. It's easier to cut superfluous stuff than add missing stuff during the edit.
Take your time to tell the story, and paint the characters every chance you get.

Here is my personal belief on the priorities in a good story:

1) Characters
2) Characters
3) Characters
4) Great interactions between those characters
5)
6)
7) A good storyline for the jacket text

All the top-rated TV shows in history were very character-driven. The story behind them is just the set upon which the actors play.
Hell, most soap operas aren't about anything at all. Neither was Seinfeld, Cheers, The Big Bang Theory, Frasier... just a buncha characters in clever interactions.


----------



## Ralph Rotten (Sep 24, 2018)

Here is something else that I started doing a few books ago.
See, the deal with characters is that they are thinnest at the beginning of the book because you just met 'em.
Even if you graphed them, and outlined them, it's like Agent Smith reading a file on Neo; Smith knows a lotta facts, but doesn't really know Neo.
In writing you don't know those characters, really know them, until you have written those first hundred pages and they come alive.

So what I do is write those first hundred pages, then doubleback to the beginning and apply that knowledge to the characters where they are thinnest.
Fill them out, paint them so the reader can see them as well as I can.


----------



## Guard Dog (Sep 24, 2018)

Ralph Rotten said:


> Here is something else that I started doing a few books ago.
> See, the deal with characters is that they are thinnest at the beginning of the book because you just met 'em.
> Even if you graphed them, and outlined them, it's like Agent Smith reading a file on Neo; Smith knows a lotta facts, but doesn't really know Neo.
> In writing you don't know those characters, really know them, until you have written those first hundred pages and they come alive.
> ...



Man, if you could see the grin on my face right now...

That's exactly what I've been doing. Because the one thing I've recognized is that not only have I gotten better at putting these people "on paper", _they_ have gotten better. So I have been taking that and carrying it - both their improvements and mine - "up stream".

So, with you pointing the way, when I do another "comb-over" editing job, things are gonna get a lot more from me than just spelling and grammar checking.



G.D.


----------



## Ralph Rotten (Sep 24, 2018)

I actually do doublebacks later in the story, just to keep it on trajectory. It's easier to fix 100 errant pages than a whole book.
So I go back over new stuff every hundred pages or so, tighten it up, firm up the characters, and make sure it is heading where it ought to be.
_Damned characters are always trying to hijack the story._


----------



## Guard Dog (Sep 24, 2018)

Ralph Rotten said:


> 1) Characters
> 2) Characters
> 3) Characters
> 4) Great interactions between those characters
> ...



Well I dunno what the hell goes in 5 and 6, but I'm pretty sure I've got the others covered.
And if I don't... I'm guessing it's way past too late now, after 30+ years of working on it. 

When I first started looking around for places to learn about this stuff, what I wanted more than anything was to find people and places that could teach me about the mechanics of writing... the technical aspects of it. Because I was pretty sure I either had the concept and story I needed already, as well as the characters, or I'd never find it/them.

And so far, I've only been encouraged by what I've been told, and had my original ideas about this stuff reinforced.

It's about time I was right for a change. :icon_cheesygrin:




G.D.


----------



## Guard Dog (Sep 24, 2018)

Ralph Rotten said:


> I actually do doublebacks later in the story, just to keep it on trajectory. It's easier to fix 100 errant pages than a whole book.
> So I go back over new stuff every hundred pages or so, tighten it up, firm up the characters, and make sure it is heading where it ought to be.
> _Damned characters are always trying to hijack the story._



Again, that's something I do on a regular basis. In fact, I've spent the entire weekend on just that.  ( Well, that and screwin' around here. )


By the way, I'm feelin' a little guilty at this turning into a "Give the Guard Dog a lesson in writing" thread... I really only posted what I did as an example of how this A.I. stuff works in a story, and what its usefulness really is.




G.D.


----------



## Guard Dog (Sep 25, 2018)

TL Murphy said:


> ...I like the original definition of intelligence as something that can't really be artificial, even if it's mechanical. Sentience and higher abstract concepts indicate true intelligence beyond mere calculation.



Thanks, and yeah, it's always surprised me that in all the discussions I've been involved in, all the articles, essays, etc. that I've read on Intelligence, artificial or otherwise, very few people even bother noting that fact.




TL Murphy said:


> I don't think we have the ability to build that kind of intelligence yet. But the current exponential rate of technological development indicates that we may be able to in the forseable future, particularly when it comes to quantum computers which are potentially light years beyond current computers. There is also the "internet" effect. I don't  think anyone could have predicted 50 years ago, the potential created by the internet, or ultimately, the potential of interconnecting every computer on the planet.



No, I don't think we'll be creating any sentient machines any time soon either. Certainly not the way people are currently trying to go about it.
But that's not to say they won't get close enough, in the next 20 years or so, to fool the average person. Or for them to be really dangerous, in any number of ways. And no, I'm not talking about Terminators... I'm thinking more along the lines of them handling self-driving cars, managing traffic grids, etc.
Can you imagine the potential chaos if the FAA decided that instead of flight controllers, they'd let an A.I. run things? And it suddenly has a "Very Bad Day"?  No thanks.



TL Murphy said:


> Orson Scott Card created a sentient character that lives inside the galactic internet. It's through this character that instantaineous galactic travel is possible but no one understands how it works. Also in the novel "The Swarm", single cell creatures living in the deepest parts of the sea are able to fuse in a way that creates a super intelligent being. The idea of hive-mind , whether organic or mechanical, is difficult to grasp but may be the key to super-intelligence.



I don't know about a Hive Mind, but I do know a bit about Swarm Technology... And it's really not all that complicated, other than in the math involved. ( Which current computers are very good at. )
It's pretty much just spatial geometry/spatial awareness, fluid dynamics, and pattern recognition. Which are all the sorts of things used in modeling weather systems, CGI armies in movies and such...
And like most things, I'm sure there are people out there who can find all sorts of other unpleasant uses for it.
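Swarm behavior of this sort is often sketched with Craig Reynolds' "boids" rules: each agent only looks at nearby neighbors and steers by separation, alignment, and cohesion. Here's a minimal toy version in Python; the weights, radius, and starting positions are made-up tuning values for illustration, not anything from a real system:

```python
import math

def step(agents, radius=5.0, sep_w=0.05, ali_w=0.05, coh_w=0.01, dt=1.0):
    """Advance the flock one tick. Each agent is [x, y, vx, vy]."""
    out = []
    for i, (x, y, vx, vy) in enumerate(agents):
        sx = sy = ax = ay = cx = cy = 0.0
        n = 0
        for j, (ox, oy, ovx, ovy) in enumerate(agents):
            if i == j:
                continue
            if math.hypot(ox - x, oy - y) < radius:   # only local neighbors count
                n += 1
                sx += x - ox; sy += y - oy            # separation: steer away from neighbors
                ax += ovx;    ay += ovy               # alignment: match neighbors' velocity
                cx += ox;     cy += oy                # cohesion: drift toward neighbors' center
        if n:
            vx += sep_w * sx + ali_w * (ax / n - vx) + coh_w * (cx / n - x)
            vy += sep_w * sy + ali_w * (ay / n - vy) + coh_w * (cy / n - y)
        out.append([x + vx * dt, y + vy * dt, vx, vy])
    return out

flock = [[0.0, 0.0, 1.0, 0.0], [1.0, 1.0, 0.0, 1.0], [2.0, 0.0, -1.0, 0.0]]
flock = step(flock)
```

No agent has any global plan; complex-looking flocking emerges purely from the local geometry, which is why the math (and not the "intelligence") is the hard part.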

One way or the other, I'm sure the future is likely to be a lot more interesting than anyone imagines, just based on the things that I've seen come along in my own lifetime.



G.D.


----------



## Ralph Rotten (Sep 25, 2018)

Guard Dog said:


> Well I dunno what the hell goes in 5 and 6, but I'm pretty sure I've got the others covered.
> And if I don't... I'm guessing it's way past too late now, after 30+ years of working on it.
> 
> When I first started looking around for places to learn about this stuff, what I wanted more than anything was to find people and places that could teach me about the mechanics of writing... the technical aspects of it. Because I was pretty sure I either had the concept and story I needed already, as well as the characters, or I'd never find it/them.
> ...





I left those spaces blank as a metaphorical way to illustrate the importance of #s 1-4. 
These days a story simply cannot stand on plot alone; it has to have great characters at the core.
Too often I read aspiring writers who focus on the story at the expense of the characters.

Lemme give you an example: why is reality TV so popular?
After all it's the basest of media, the lowest of low, intellectually dim content.
But it is popular because it is full of characters.



Anyhow, sorry to hijack your thread on AI.

What were we talking about?


----------



## Guard Dog (Sep 25, 2018)

Ralph Rotten said:


> Lemme give you an example: why is reality TV so popular?




Damned if I know... Especially since there's no "reality" to it. :dejection:

Edit: Let me just add that I suspect the popularity of "Reality TV" has something to do with that old statistician's joke, "70% of people have a below-average I.Q.," not being so much of a joke as it once was.
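For what it's worth, that joke is only mathematically possible in a skewed distribution, where a few extreme values pull the mean away from the bulk of the data. A throwaway Python illustration, with entirely invented numbers:

```python
# In a right-skewed sample, a few high values drag the mean upward,
# so most individuals really are "below average" (i.e., below the mean).
scores = [80, 82, 84, 85, 86, 88, 90, 92, 160, 200]   # invented data, two outliers
mean = sum(scores) / len(scores)                       # 104.7
below_average = sum(1 for s in scores if s < mean)     # 8 of 10, i.e. 80%
```

With a symmetric distribution (like I.Q. is usually modeled), only about half can be below average, so the joke stays a joke.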


G.D.


----------



## JustRob (Sep 25, 2018)

G.D. I for one am disinclined to follow this thread if you persist in indulging in such gratuitous expletives.


----------



## Guard Dog (Sep 25, 2018)

JustRob said:


> G.D. I for one am disinclined to follow this thread if you persist in indulging in such gratuitous expletives.



Apologies, Rob, but I'm in a lot of pain today, due to the weather aggravating my neuropathy in my neck and shoulder.
And I tend to type exactly what I think, when in this state.
However, the expletive was not gratuitous, in my opinion, since those four words have surely left all who read them with no doubt as to exactly how I feel on the subject of "reality TV".
I will go make the appropriate edits, to spare anyone's sensibilities... even if I don't personally agree with those sensibilities.




G.D.


----------



## JustRob (Sep 25, 2018)

Damned decent of you G.D. and as for "reality" TV, I'm right with you there in spirit no matter how it is expressed. However, I consider the most artificial TV to be the cookery contests. I mean, you can't smell or taste the results, so you are expected to sit and assume that the judges know what they are talking about without being able to judge for yourself. What a waste of air time. In fact I'm appalled at the way that nowadays nothing is considered entertaining without the introduction of an element of conflict into it: cookery, dancing, singing, gardening, whatever. Mind you, a series entitled "Sermon Wars" might increase viewing numbers for religious programmes.

P.S.
My angel suggests that the winner should receive a contribution to the appeal to save their church steeple.

Come to think of it, an AI would probably have kept to the original topic better than we are. That must prove something though.


----------



## Guard Dog (Sep 25, 2018)

JustRob said:


> Mind you, a series entitled "Sermon Wars" might increase viewing numbers for religious programmes.



"Sermon Wars - The New Crusades"?  
If they'll make religious leader fight with sword and shield, I might watch that... 

...especially if the loser gets fed to the lions. :thumbr:


G.D.


----------



## JustRob (Sep 25, 2018)

Guard Dog said:


> Apologies, Rob, but I'm in a lot of pain today, due to the weather aggravating my neuropathy in my neck and shoulder.



Commiserations, but my angel remains angelic even when her trigeminal neuralgia is persecuting her. I understand that the best cure for it is suicide as the trigeminal nerves connect directly into the brain and it is very difficult to control them by any means.


----------



## Guard Dog (Sep 25, 2018)

JustRob said:


> Commiserations, but my angel remains angelic even when her trigeminal neuralgia is persecuting her. I understand that the best cure for it is suicide as the trigeminal nerves connect directly into the brain and it is very difficult to control them by any means.



Well, I've never been "angelic" to begin with, so feeling like I've got someone shredding up muscle tissue with a hot knife and electrified knitting needles doesn't improve my already-questionable disposition in the least.

Also, there is no cure for this stuff, nor any effective treatment. 70% of the people doctors try to treat don't respond at all, and the other 30% either see exceedingly little improvement or, more often, a negative reaction.

And I'm pretty sure I'm the poster-child for negative reactions to medications. It certainly seems that way, anyhow.

By the way, I have an unnaturally high tolerance to pain. Things that stop most people dead in their tracks don't even tend to slow me down. But on a really bad day, this stuff leaves me unable to even get out of bed.

Which is why I'm "Medically retired", at only 55 years old.



G.D.


----------



## JustRob (Sep 25, 2018)

Guard Dog said:


> And I'm pretty sure I'm the poster-child for negative reactions to medications. It certainly seems that way, anyhow.



Yes, I've been there. When I was prescribed tranquillisers as a young man they made me so hyperactive that I had to be taken off them. 

Getting back to the topic, I can see your interest in transfer of the human mind into an AI then. If only it were that easy.


----------



## Guard Dog (Sep 25, 2018)

JustRob said:


> Getting back to the topic, I can see your interest in transfer of the human mind into an AI then. If only it were that easy.



Well, although I certainly wouldn't mind a new, pain-free android-type body to get around in, my interest in neurology and the human brain began much longer ago than my own medical problems.

I've always been curious about what exactly makes us who and what we are, and causes us to behave the way we do. And most of that comes from growing up with, and being raised by... crazy people.



G.D.


----------



## Guard Dog (Sep 25, 2018)

Concerning mind transfer, I'm of a mind that it's simply not possible. Nor do I believe Star Trek's transporters do anything more than kill a person, then build an exact duplicate elsewhere.

After all, if you make an exact copy of something, somewhere else, don't you just end up with two of the same thing? Even if you destroy the original? 

To me, that's nothing more than taking a pair of identical twins, and making sure that their brains, engrams, or whatever are and stay identical... even though that's not possible due to their different physical perspectives of the world.

So, an exact copy of my mind, even inside a computer, is still just another me.

Make sense?




G.D.


----------



## Ralph Rotten (Sep 25, 2018)

You ever read any Cory Doctorow?


----------



## Guard Dog (Sep 25, 2018)

The name isn't familiar, so I guess not.




G.D.


----------



## JustRob (Sep 25, 2018)

It's certainly a philosophical subject, the nature of identity I mean. It was also a fundamental issue in my working days as a computer systems designer. If our company received two identical copies of the same application for life assurance, then what did the client actually want, two policies or just one? Was it one application duplicated or really two?

How do you know that you are dating identical twins? Kill her and bury her in the back yard and wait to see if she turns up on the next date. Identity is fundamentally about life history, i.e. when an entity comes into existence, what it experiences and when it ceases to exist. As soon as you clone a person they have different experiences and therefore are different people. 

So far as a unique "person" is concerned, one has to consider the nature of the psyche or soul. In the Riverworld books there was a unique _ka_ which associated itself with anything that it recognised as that person's body. Hence in the Star Trek transporter situation the one remaining version of the physical person would acquire their _ka_ by the same reasoning. The resurrection of Spock employed a similar concept. Some researchers into psychical phenomena maintain that the soul is an information structure that resides in the zero-dimensional space within which our space-time reality was created during the big bang.

In the old Choctaw culture a person had two spirits, one which would go on to the happy place on death while the other remained with the body to settle the person's affairs. Hence the Choctaw were very careful about where a person's remains were kept. Yes, it's ultimately all about philosophy.


----------



## Guard Dog (Sep 25, 2018)

JustRob said:


> ...Yes, it's ultimately all about philosophy.



With the real problem being... what is the truth of the matter? Because it's quite possible - nearly guaranteed, actually - that no one knows or even has the faintest of ideas.





G.D.


----------



## Guard Dog (Sep 26, 2018)

Thinking further on the idea of mind transfer, and the likelihood that it just wouldn't work, I suppose the next best thing would be something along the lines of the movie Anita talked about several posts back; brain transfer. Simply put the entire organ in an artificial body.

Of course, then you'd end up with something like the Borg Queen from Star Trek... and would not only have to overcome the difficulties of working out an interface for millions of nerves with the body, but also of finding the right mix of "chemical soup" to keep the brain alive, healthy, and functioning properly.

Still, that's bound to be easier than figuring out everything else required to duplicate a human brain in its entirety.

Throw in the mythical "Nanites", and get them to replace all the cells in that brain with a non-biological substitute over time, and you may just have the one way that would actually work, to "transfer" a human mind to a machine.

Though it would be taking the long way 'round, and require the perfection of several other technologies first.



G.D.


----------



## Ralph Rotten (Sep 26, 2018)

If you wanna discuss transferring consciousness from one body to another, watch *Altered Carbon* on Netflix. Season 1 is binge-worthy.
I mention Cory Doctorow because he wrote some stuff on the same concept; people don't die because they are backed up. You just load your backup into a new body and live forever.


----------



## Guard Dog (Sep 26, 2018)

Ralph Rotten said:


> If you wanna discuss transferring consciousness from one body to another, watch *Altered Carbon* on Netflix. Season 1 is binge-worthy.



Oh, I've seen it. Watched the whole thing with only bathroom, booze, and coffee breaks.
Can't wait for more. :thumbr:



Ralph Rotten said:


> I mention Cory Doctorow because he wrote some stuff on the same concept; people don't die because they are backed up. You just load your backup into a new body and live forever.



I'll have to look him up then.

Also, given the number of people I find to be "mentally Constipated", I can't help but worry they might be the ones that'll live forever, due to already being so... backed up. :roll: :devilish:




G.D.


----------



## JustRob (Sep 26, 2018)

Guard Dog said:


> Of course, then you'd end up with something like the Borg Queen from Star Trek... and would not only have to overcome the difficulties of working out an interface for millions of nerves with the body, but also of finding the right mix of "chemical soup" to keep the brain alive, healthy, and functioning properly.



The brain's plasticity would hopefully sort out the nerves, provided that they were connected to the artificial mechanisms in a way that gave it the chance to. Survival is a built-in function.


----------



## Guard Dog (Sep 26, 2018)

Why I think "Synthetic" or "Engineered" people may be a reality, some day:

Organic Computing

Keep one thing in mind here... "organic" only refers to carbon-based chemicals of the kind found in living things... amino acids, carbon, etc.

All of those things are found naturally, in their raw state. Hell, you can get carbon by simply passing a surface over a candle flame.

Not a real big stretch to think some other chemicals or compounds might actually do a better job then, is it? Chemicals and compounds that don't fall into that "organic" classification?


G.D.


----------

