It is absolutely imperative that we not have androids
- Topic Archived
Actually, the background of this and the premise of my reasoning against it has to do with two things. One, the general confusion in mankind about what constitutes an intellect and the power of reasoning (i.e. that it is a spiritual faculty and not a material one that can be possessed by non-spiritual creatures); and two, the vegetarian and especially Buddhist notion of the suffering of animals and therefore the wrongness of eating animals, when I hold that animals are not capable of true suffering since I hold they have no consciousness as we can understand it but are merely functioning on a sensitive, not intellectual/spiritual, level of operation.
But evidently such sympathy on the part of man towards animals, although it is good that man not be cruel to animals (cruelty would betray an interior aspect of cruelty in the man himself), can nevertheless be taken too far if a man begins to believe wrongly about animals.
And also I suppose there are great dangers in believing wrongly about the nature of the intellect, namely that it is material, when in fact I hold that to be impossible.
So combining the two, a society can be envisioned where, due to a lack of understanding of the intellect and also the inordinate sympathy man can have which clouds his reasoning, he can begin to suppose that androids not only have rights per se, but also are valid companions and proper objects of his affections, especially of his love. Now this would evidently be an empty, egotistical debasement of a man, disposing him towards friendship with that which is not, nor can ever actually be, his friend. I suppose that the barrier between what is man and what is not man would continue to break down, as evidently it has been doing, and continue to degrade man into that which is lower than himself.
I am not interested in a dog-fight over the above principles, and present them only to be clear about what I'm getting at, and am more interested in seeing where other people stand on this issue.
I guess the first question would be, what constitutes an "android"? Is it strictly a machine capable of human thought, or would an actual person who's been "upgraded" count as well?
Well, there is a new accent of n00b language. It's called: Vet LUEser goes Foreign!-MegaSpy22
Those must be the pants of the gods!-Digitalpython
Good point, definitions are important. Well, one of the premises I'm working with is that no machine is or can be capable of human thought, inasmuch as I am holding that the intellect is a spiritual faculty not derived nor derivable from matter. So the first definition is out.
A person who was "upgraded," so to speak, is what I would consider a cyborg, or maybe a bionic man, and, although I'd certainly caution somewhat against that for other reasons, it doesn't bear on my current thought.
What I mean by android is simply a humanoid robot. Actually I'm rather opposed to robots in general, but inasmuch as a humanoid robot is even more inclined to draw sympathy from men, it is by far more dangerous than just any old robot.
One quick and dirty way of looking at thought being a spiritual and not material process, without attempting a metaphysical argument, would simply be by looking at common sense. Now common sense is not the foundation of philosophy but it should jibe with whatever philosophic knowledge proposes as true.
But simply: if all things are material (i.e. non-spiritual, thus matter, energy, anti-matter, etc.), and all material things are governed by scientific laws, and nothing which functions under a law is free, then all thought is material, therefore governed by laws, and therefore not free. But that we do not have free will is opposed to common sense (and good philosophy). Furthermore it's an intolerable conclusion that no one actually holds to, except in words.
Evidently, our mind must be something outside the constraints of the laws of science, i.e. not material. Following on that, it would be an absurdity to suppose we could create a robot with an actual intellect or free will, inasmuch as something spiritual does not come about from the slapping together of various material parts, as these two things are in utterly different orders. (Also this is why I would say that the belief in the "necessity" of alien life, based on size of Universe and so-called probability theorems, is, likewise, an absurdity; but that's a side issue)
Also, like I said, I am not looking for people to debate or to have to justify their answers, so no one should be afraid to simply say what they think in a line or two, without fear of getting embroiled in a discussion on the matter.
An interesting topic.
Seeing as my PhD is in experimental psychology, it shouldn't surprise you that I disagree that cognition is a "spiritual faculty", and that people have free will, and with all other such attempts to construe behavior as supernatural (whether it's the behavior of humans, animals, or computers). Such hypotheses about the supernatural are unfalsifiable, and hence antiscientific. That is, if you're willing to even entertain these ideas, then the whole idea of the empirical study of behavior becomes a non-starter. What you call an "intolerable conclusion" is not just widely believed, but a foundational assumption of social science.
Actually I'm rather opposed to robots in general
Even, like, a robot arm in a factory (the sort of robots that industry makes enormous use of today) or just a robot that's smart enough to, e.g., hold a conversation?
Have you ever stopped to think and forgotten to start again?
I did reckon in advance that you would hold a differing view on the will and thought, of course. As to my nebulous "robots in general" statement, I meant only that I hold varying levels of apprehension over the idea of robots. The criterion is largely how they will affect humans, especially by causing them to function less like humans ought to function (i.e. normal social behavior, which perhaps is another can of worms for you, I'm not sure; or normal prioritizing of the importance of one thing over another). So in proportion as robots draw sympathy, which would seem to be one of the greatest ways of disturbing human society and values, I would be more apprehensive about them.
Thus robot arms are basically a non-issue, and androids are the extreme opposite end. Robots in the middle, as they move towards android, or as they move towards robot arms, would be graded appropriately of course!
Short of a cataclysmic event of some sort, I only see science and technology moving forward. If it can be done, it will be done. We might be able to delay it a little, but not by much in the grand scheme of things. So I think that androids are coming regardless. The question is a matter of who will make the androids and what their intentions are. We could be pretty screwed depending on who decides to make them. Things could obviously go wrong with bad intentions, but even with good intentions, we might make something dangerous. But counting on humanity to hold off on doing it isn't viable. It's sort of like a stray square foot of untouched snow; once enough people realize it is there, someone is going to go after it.
Fame is but a slow decay.