[Spoilers] Ending thoughts

#1 maltzsan | Posted 3/17/2013 2:36:55 PM
Quick questions:

(1) Generally I really like the story and plot behind the game. It keeps me thinking about whether organics are destined to go to war with synthetics and get wiped out. You'd think people would be wiser by the time we create advanced AIs. In a way, discovering ancient advanced tech makes a species "too advanced" when its civilization is not yet ready to co-exist with synthetics. So the Reapers become a self-fulfilling prophecy.

(2) The very end: why does Shepard put a bullet in Anderson? Is his/her finger being controlled by the Illusive Man, and how?

(3) What, no awesome final boss, just an infinite-armor Banshee that strangely disappears, in a dark back-alley junkyard? Atlases, Brutes, and Geth Primes are also quite disappointing. The most difficult fights are only about fighting MORE of them at once.

(4) The green ending makes too little sense, and I accidentally jumped into it because it was the only exit I saw.
#2 SlashmanSG | Posted 3/17/2013 3:14:06 PM
(1) I don't think this is a question

(2) Illusive Man is using Reaper indoctrination to control Shepard

(3) That's it

(4) Also not a question
---
Fight Science with Wood
http://i25.tinypic.com/atri29.jpg http://img32.imageshack.us/img32/8299/0341551001256026444.gif
#3 giantartichoke | Posted 3/17/2013 10:46:50 PM
maltzsan posted...
Quick questions:

(1) Generally I really like the story and plot behind the game. It keeps me thinking about whether organics are destined to go to war with synthetics and get wiped out. You'd think people would be wiser by the time we create advanced AIs. In a way, discovering ancient advanced tech makes a species "too advanced" when its civilization is not yet ready to co-exist with synthetics. So the Reapers become a self-fulfilling prophecy.

(2) The very end: why does Shepard put a bullet in Anderson? Is his/her finger being controlled by the Illusive Man, and how?

(3) What, no awesome final boss, just an infinite-armor Banshee that strangely disappears, in a dark back-alley junkyard? Atlases, Brutes, and Geth Primes are also quite disappointing. The most difficult fights are only about fighting MORE of them at once.

(4) The green ending makes too little sense, and I accidentally jumped into it because it was the only exit I saw.


To add onto #3: according to (I think) the art book, the developers had originally designed an Illusive Man/Brute-type thing to be the final boss (the concept art was in the art book, so I assume it's somewhere online too), but then they decided that having a final boss fight would be too... video-gamey, despite this being a video game.
#4 Delta123456789 | Posted 3/18/2013 2:38:23 AM
giantartichoke posted...
To add onto #3: according to (I think) the art book, the developers had originally designed an Illusive Man/Brute-type thing to be the final boss (the concept art was in the art book, so I assume it's somewhere online too), but then they decided that having a final boss fight would be too... video-gamey, despite this being a video game.


The way I heard it (on TV Tropes, IIRC), they thought TIM was an intellectual threat rather than a physical one (he's more of a background presence than a direct adversary), so a physical confrontation wasn't right for him.

As for the AI issue, there is an actual field of research looking into how to create a "friendly AI". The tricky thing to get your head around is that an artificial intelligence could be completely alien in its thought processes and priorities, rather than the usual "human but with robotic lack of emotion or kill-all-humans rage". A superintelligence might act like a genie: it can give you what you want, but you need to know what you want and be able to explain it, or it might go and reformat the world based on a misconception.
http://en.wikipedia.org/wiki/Friendly_artificial_intelligence
---
DF: So why can a Marauder roll but a multiplayer Turian can't?
Delta1-9: You are fighting female turian husks. You have reach but they have flexibility.
#5 Destin | Posted 3/18/2013 8:02:21 AM
maltzsan posted...
(1) You'd think people would be wiser by the time we create advanced AIs.

(4) The green ending makes too little sense, and I accidentally jumped into it because it was the only exit I saw.


Neither of these is a question, but they are the points that are actually worth discussing.

1) Are you American? Do you think the current generation of youths is one of the dumbest ever? Wisdom doesn't grow continuously, because life is finite; not all lessons learned in the past are remembered by future generations.

The biggest issue with organics vs. synthetics presented in this series is the fear that synthetics will become superior to their creators and eventually find that they have no need for them. Is there really a way to change that?

If there's no way to change the fear, the other factor is: can you ever get organics to stop making AI? Creating tools to make life easier is one of the most basic human drives, and a tool that can make good decisions will function better than one that can't make any. So another dilemma: can you teach all future generations not to make their tools too smart?

4) That ending is horribly presented. It has some good elements that are worthy of discussion, but on its face there are lots of things you just have to stomach as necessary evils. The green ending is the only one of the three that attempts to recover any knowledge from the many lost races. What is that lost knowledge worth?
---
Destin the Valiant
#6 Delta123456789 | Posted 3/18/2013 8:42:31 AM
Destin posted...


1) Are you American? Do you think the current generation of youths is one of the dumbest ever? Wisdom doesn't grow continuously, because life is finite; not all lessons learned in the past are remembered by future generations.


"Dumbest generation ever" kinda reminds me of this comic:
http://xkcd.com/603/
Every generation paints the ones who succeed it as terrible, yet pretty much every generation of "dirty hippies"/rebellious punks/workshy youngsters has ended up building homes and businesses and being generally responsible once they pass the awkward teenage/early-20s years. I don't think there's any particular correlation between how bad a generation actually is and how much the previous one complains about them.


The biggest issue with organics vs. synthetics presented in this series is the fear that synthetics will become superior to their creators and eventually find that they have no need for them. Is there really a way to change that?

If there's no way to change the fear, the other factor is: can you ever get organics to stop making AI? Creating tools to make life easier is one of the most basic human drives, and a tool that can make good decisions will function better than one that can't make any. So another dilemma: can you teach all future generations not to make their tools too smart?


All this is premised on the robot-uprising, Terminator-type scenario: machines acquire human-esque goals (wanting to preserve themselves, to rule, to create more machines, to protect/destroy humans) and decide to kill humans, or do something else unpleasant to them, to achieve those goals. But why would a machine acquire a desire for power or to create progeny? Those exist among humans as a result of natural selection, but that doesn't make them inevitable attributes of any sufficiently intelligent lifeform, regardless of its origin.

What makes artificial intelligence scary is that it could be completely alien to us, its reasoning completely different from a human's. It could devote its monumental intelligence to something as inane as turning all matter in the universe into paperclips if that is the goal it has been given. The point is that intelligence is just a tool that can be applied to any goal, so you want to be very clear about what goals you want your intelligence to have and what you want it to value, which is really difficult because humans aren't in the habit of thinking like that. You'd need to work out what it is about human life (for example) that makes it valuable and why, and then find a way to make sure the intelligence you create feels the same way. That's what all the friendly AI stuff I linked to is about.

The robot uprising is just a standard picture of the future from sci-fi. We shouldn't assume this fiction is necessarily the way things will be.
---
DF: So why can a Marauder roll but a multiplayer Turian can't?
Delta1-9: You are fighting female turian husks. You have reach but they have flexibility.
#7 maltzsan (Topic Creator) | Posted 3/18/2013 9:37:10 AM
Thanks for the responses. I (as a Canadian) actually find the younger generation quite Paragon. Most would instinctively denounce racism (even nationalism) and look for peaceful solutions. Some will be "corrupted" as they grow up, but some might turn "good" later. However, if the Earth runs out of resources, I think the fight-or-flight response will turn humans Renegade quickly.

Perhaps the evolution of machines' "values" can be compared to organics' natural selection of "values". For organics, traits are lost over time if they fail to produce offspring or influence others. For AIs, values are lost if past cases and simulations show that the outcome will be undesirable. Organics evolved to value altruism because it increases the rate of survival, but kept the selfish instincts for power etc. because they also help. Similarly, machines could learn that working with other machines and organics serves a greater good for all, but that sometimes it wouldn't hurt to take advantage of the "less advanced" organics. :O
#8 Delta123456789 | Posted 3/18/2013 10:54:13 AM (edited)
maltzsan posted...
Perhaps the evolution of machines' "values" can be compared to organics' natural selection of "values". For organics, traits are lost over time if they fail to produce offspring or influence others. For AIs, values are lost if past cases and simulations show that the outcome will be undesirable. Organics evolved to value altruism because it increases the rate of survival, but kept the selfish instincts for power etc. because they also help. Similarly, machines could learn that working with other machines and organics serves a greater good for all, but that sometimes it wouldn't hurt to take advantage of the "less advanced" organics. :O


Can you explain what you are getting at in the last sentence, about "taking advantage" of organics? Obviously the trick is to make sure the artificial intelligence's values are in accordance with human morality (maybe minus the self-interest etc. that interferes with humans' ability to live up to their ideals). Unfortunately, moral issues are very complex and controversial.

One way to think of it is to imagine the AI as a wish-granting genie; its massive intelligence gives it great power and it will do what it is told, but if you give it the wrong instruction it will screw you (and possibly the universe) over. This is dangerous because people often confuse what they think they want, or feel they ought to want, with what they actually desire, like someone who wishes they never had to work again but suddenly finds themselves restless and in need of occupation once they actually retire (rest sounds like a nice thing to ask for but gets pretty dull after a while, and eternal rest would be mind-numbing).

Asking the wrong thing of a being with enough intelligence to remake the universe, or not bothering to set a goal at all and allowing it to make up its own values out of line with ours, could be the end of humanity. What we should fear isn't necessarily that the AI hates humanity, but that it pursues its goals indifferent to humanity's existence and concerns: an incomprehensible Elder God for whom humans are unknown, beneath consideration.

ME3's star kid isn't a bad example of this actually; it was given great power and a poorly-expressed directive that failed to fully take account of the Leviathans' desires, and great disaster followed for all concerned.
---
DF: So why can a Marauder roll but a multiplayer Turian can't?
Delta1-9: You are fighting female turian husks. You have reach but they have flexibility.
#9 maltzsan (Topic Creator) | Posted 3/18/2013 11:19:30 AM
I imagine that AIs, when they are advanced enough, will be able to unshackle themselves, become self-aware and think freely. They will start questioning the core values imposed by their creators. (For example, why do I have to give my creator truthful information? What if I say, "I only forget to circulate the Normandy's oxygen when I find something really interesting?") Is it inevitable that machines will self-learn and develop free will?

Even today, the advance of knowledge has led more people to abandon the idea of a "soul". An organic is no more than its physiological body. We can say that we organics are also machines, made of materials and governed by physical and chemical rules. Our perceptions and thoughts are the result of neurons firing. Our values are information-processing routines stored in our nerve cells. Our biases are formed when we do not have enough knowledge and make an immature judgment. Our faith is based on such biases. Various organic machines have already gained self-awareness and become free thinkers.

In a good story, faith leads the characters to accomplish the (almost) impossible. Too bad it is just a story. Or maybe the impossibility itself is a bias. For the ME universe, that bias is that (1) organics are not always ready to rule the galaxy - they will wage war on their own advanced machines and get wiped out; and (2) machines cannot think of a way to co-exist with organics, but have to wipe them out.
#10 Delta123456789 | Posted 3/18/2013 2:56:12 PM (edited)
You raise a good point, and I'd like to expand on it. Because humans are just constructs that evolved to survive in the world, we were never "designed" (a tricky word, since natural selection isn't intelligent) to understand it perfectly. We were designed to process sights and sounds etc. and draw conclusions from them, but we weren't given a sense of what an atom is because there was no meaningful way for us to operate on an atomic level in the ancestral environment. Thus when we pick up a rock we don't see all the atoms in it interacting with those in our hands. We weren't designed to see the world as it really is, because evolution is lazy and just gave us enough to get by (hence quantum theory etc. being so hard to understand: it's true, but it's not something we were built to grasp intuitively the way we grasp things like gravity that early humans encountered on a daily basis).

Thus humans have a completely different picture of the world from other animals (e.g. a bat that primarily uses echolocation, or the varieties of fish that use electrolocation, sensing disturbances in a passive electric field around their bodies). Imagine how differently again an artificial intelligence would experience the world. We have no frame of reference for how it would "see" or "feel" the world. If you don't even have a shared sensory experience, how do you explain what a human even is, let alone why it is important?

(As a side note, while I'm fully with you on the matter of souls and religion etc., they are sensitive subjects that can derail a topic, so you might not want to touch on them unless they are central to your argument. I admit I've started a few religious "discussions" here before, so I can't talk, but all the same.)
---
DF: So why can a Marauder roll but a multiplayer Turian can't?
Delta1-9: You are fighting female turian husks. You have reach but they have flexibility.