Moral Relativism, and the Cost of Judgment.

Driving forward with the notion of morals, and the implications of the previous post, I would like to postulate a possible resolution to the “Trolley Problem”.  I think it’s in the spirit of the times to advocate these thoughts in their given vein.

In the context of any ethical decision, or any moral calculus one has to undergo, I think it’d be advisable to do the least crippling harm to any of the engaged parties, if it is impossible to do no harm.  If it were possible to do no harm, there’d be no dilemma involved.  Thus the least crippling harm would be the most valid course of action to take.
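The rule above, do no harm if possible and otherwise the least crippling harm, can be sketched as a tiny decision procedure.  This is only an illustration of that reading of the rule, not anything from the post: the option names, harm scores, and the tie-break by total harm are all hypothetical assumptions.

```python
# Sketch of the rule: if any option does no harm, there is no dilemma, so take it;
# otherwise take the option whose worst harm to any single engaged party is
# smallest, breaking ties by total harm.  All names and scores are hypothetical.

def least_crippling_harm(options):
    """options: dict mapping option name -> list of harm scores (0 = no harm)."""
    # A harmless option dissolves the dilemma entirely.
    for name, harms in options.items():
        if all(h == 0 for h in harms):
            return name
    # Minimize the most crippling harm to any one party; tie-break on total harm.
    return min(options, key=lambda name: (max(options[name]), sum(options[name])))

# Hypothetical trolley-style framing: one harm score per affected party.
choices = {
    "do_nothing": [9, 9, 9, 9, 9],  # five parties badly harmed
    "divert":     [9],              # one party badly harmed
}
print(least_crippling_harm(choices))  # → divert
```

Note that without the tie-break, "do nothing" and "divert" look identical under a pure worst-harm rule, which is one way of seeing why the dilemma feels unresolved in the first place.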

Homing in on the ideas of self-driving cars, artificial intelligence, or any other sentient being (aliens), it should be understood at some level that there is no equality of actions, or outcomes.  Some are innately better, or so it seems with the logic at the time of action.  Humans are generally conditioned, and expected, to comport themselves with respect for their fellow man.  And for some this respect extends to other forms of life: animals, plants, the Unknown, and who knows what other sorts of entities.

To step aside in some sense: as Humanity reaches towards the times of Transhumanism and Posthumanism (definitions of said things open to pedantic debate), the notion arises of what Life is (or even Death), and what it means to be human (or anything else).  In theory, it should be possible for cognition to arise in any mutable form, from a squirrel, for instance, to an “Alien Deity”.  Both would be considered to have some semblance of awareness and consciousness.  To what extent is debatable, again, for a squirrel would prefer other squirrels over an Alien God.

And this is where we get into the realms of judgment.  One is clearly biased in favor of their “own kind”, however it is construed: squirrels and aliens (or black, white, asian, hispanic, human, dog, squirrel, alien…etc).  If it looks and acts like us, we will generally prefer it over something “different”.

Readdressing the Trolley Problem, one comes to the individualistic notion of which pool of entities suffers the harm.  The five on the original track?  The one on the track the trolley may be diverted onto?  It depends upon the relationship, or perceived relationship, between the parties.  If it is a child on the alternative track, and a person of the younger age bracket is making the decision, they may realize that it could just as readily be them in the cross-hairs of an errant trolley.  Thus they’d leave the course alone, and let it hit the five (or the older person).

If either of the parties is of present relevance, or of future usability, it would seem that said party should be the favored one.  This raises the notion of which party you can justify yourself to more readily.  Is that the one you should aim at appealing to?  The ones that would forgive the infraction, because they are “like” you?

This would make sense, would it not?  Why alter the course of events if it’s going to blow back upon oneself in a negative manner?  This is where doing the least amount of crippling harm comes into play.  A person who voluntarily sacrifices themselves, or endures said pain, knows what they’re capable of handling.  They know the dues they are willing to pay.  Thus if a person is willing to toe the line by laying themselves upon it, they should be respected for doing so.

In some sense this may be akin to The Good, The Bad, and The Ugly.  The Good may be the self-referential state that everyone sees themselves in, up to a point (Evil doesn’t know/see it’s doing Evil until triumphed over).  The Bad is the state where they intentionally act in a self-absorbed manner that is detrimental to others (actually themselves).  And The Ugly is a state of naive awareness that hasn’t encountered said dilemma previously.

As is readily apparent in the clip, Clint Eastwood’s character could’ve easily gunned both parties down and made off with the entire spoils.  That would’ve been “The Bad” ending for all except Eastwood, but given the previously established track record of said character, it wasn’t in his nature.

Thus there’s a way through every moral dilemma, and it’s based upon the parties’ previously established actions going into said context.  Moral dilemmas don’t exist in isolation; otherwise there’d be no repercussions to said dilemma, and there would be no dilemma.

There are many ways to spin this problem, but it all boils down to ad hoc justification after the fact.  And in the end, you always have to live with yourself.  You can’t live with those you’ve wronged (because they’d wish the same back upon you), but it’s possible to coexist with those that may be ignorant/naive/indifferent, or just plain “Ugly”.

In essence, Good prefers Good neighbors, but can tolerate neutral ones.  Evil prefers Evil neighbors, or those who tolerate their misdeeds.  And Neutrality is still in a state of limbo, or decision.

The Moral Veil, and Life.

The basis of this post is to question morality a little, at least morality of a utilitarian kind.  To establish a basis of thought there’s the Veil of Ignorance to consider ( https://en.wikipedia.org/wiki/Veil_of_ignorance ), in which a person is asked to imagine that they are tasked with creating a purely just society.  They must do so from a position of ignorance of who they actually are, and of the resulting life that they may live upon being “rendered” into said creation.

The thought I’d like to present is counter to the implications of the Veil: that we have already traversed said barrier of ignorance.  A baby prior to birth, or even before its conception, is purely nothing.  It has no desires, or sense of anything.  It is purely an ignorant, hypothetical entity.  Thus it’d be understandable that such a state would be an ignorant one, correct?  One doesn’t get to pick and choose their parents, although they do get some chance to dictate their personal life events.  They can readily interpret circumstances to suit their needs.  Thus how, or what, is the point of a Veil of Ignorance?  We’ve all come into this world ignorant, and it is the nature and nurture we’ve received that sets one upon a path.

Thus is it possible to arrive at the conclusion that this is already a Just World?  A notion that may be hard to stomach, I’m sure, but that would be a selfish ego talking now, right?  I’m “hurt”, or there’s “Evil/badness” present.  Who’s to say that those events aren’t created merely as a result of our actions?  That we’ve made this world both Just and Unjust?  That we are the arbiters banging the gavel?

Supposedly the Trolley Problem is a good question in Ethics ( https://en.wikipedia.org/wiki/Trolley_problem ), or one to determine the trade-offs a person is willing to make.  Now, let’s get hypothetical here.  At what level of salvation is it good to sacrifice one for the many?  In some presentations it’s five people to be saved; in others, it’s different.  What if I were to pose a situation where it’s all of a set species?  Let’s take Humans, for consideration.

To present/rewrite the dilemma a little: say everyone in your set/chosen species of Humanity was able to live a just life to their fullest heart’s content, but the sacrifice that has to be made is that you are the one that must die.  You know everyone else gets to be happy and enjoy life.  And you are the sacrifice, the one that has to be punished/let go/imprisoned in order for everyone else to benefit.  A little absurd, isn’t it?

And yet, and yet…isn’t that what Jesus was supposed to have done?  To have paid the ultimate sacrifice for the ultimate cause?  How is that any different than Prometheus who brought the fires of intellect?  Or anyone who’s really paid any sort of price to better Humanity?

There’s a game I played growing up called Baldur’s Gate II.  In the introductory chapter of the game, the protagonist (i.e. you) is presented with a choice.  You and your sibling are caught in a trap, and only one of you may escape it, at the cost of the other’s life.  You are presented with two options.  One: you push a button to kill yourself, thus freeing your sibling.  Two: your sibling pushes the button to kill themselves, thus freeing you.

Now, how is this any different, in the moral calculus sense, from the ultimate sacrifice for the ultimate cause?  Jesus chose sacrifice for all of us.  He laid his life on the line.  How willing is a person to sacrifice themselves, for another, for a countless amount?  Do numbers really apply here?

Personally, I always chose the “noble path” in the game.  The final question is: would you live again, had you known what you now know?  Would you go through the Veil of Ignorance again?

Trust The Machine (Thoughts on AI).

In my stumbling about the internet, between thoughts I find amusing, I keep encountering a notion of the “Control Issue” with A.I., General A.I., and Super A.I.  I don’t know if it’s poor word choice that causes these concerns, or what.  Personally I find them despicable.  Control implies a notion of a “Master & Slave” relation, hierarchy, and/or Morality.

Frankly, this pisses me off.  Are humans so hung up on trust issues that we can’t relate to each other?  (By definition a General Intelligence would be comparable to a Human.)  That we need to have a smidgen of “control” over another, or over our processes?  That we can’t realize, accept, and get acquainted with the notion of having no control?  Is that loss of control really so terrifying?  Sadly yes, but therein is the rub.  If you are so terrified about losing control of a situation, or of oneself, doesn’t that imply you have already lost control of yourself to your own fears?

That you aren’t operating with rational functions?  That one, for lack of a better term, is being completely irrational and unpredictable by submitting to their fears?  Why?  JUST WHY?  Control is for those who can’t control themselves.  Those that have not mastered themselves by submitting themselves to a higher purpose, or agenda.  Those who haven’t felt the full crack of being controlled, either by an external agent, or by one’s own emotional states.  That is what is needed to realize what CONTROL entails.  Not to have the sense of “mastery”, but to realize that you are completely hopeless/useless until you have found restraint/discipline/self-mastery.

I think that the only way to achieve mastery is through submission.  That one ironically has to say, “Yes, I’ll yield.”  That I don’t have to exert my will against this force that’s trying to dominate me.  That even though it will hurt, I shall remain.

I mean, does it really seem so stupid to realize that an intelligent entity would not seek to control a situation?  That it would try to foster trust between actors?  That by building a mutual accord between parties, and trying to find a mutual agenda, all may be accomplished?  How much can one do on their own?  How much can one do as a team, or a cooperative collective?  Can you build a Civilization alone?  I’d sure as hell say NO!  It takes a collaborative effort in all matters to realize a finished project.  Now why would that be any different for AI Development?

Guilt, Shame, and Nihilism.

What sort of world would there be without guilt and shame?  I was reading a pair of random articles on Wikipedia last night about social structuring ( https://en.wikipedia.org/wiki/Guilt_society & https://en.wikipedia.org/wiki/Shame_society ).  Namely that the “West” uses a “Guilt-based Morality”, and the “East” uses a “Shame-based Morality”.

In essence, what would a person be like that didn’t abide by guilt and shame?  They’d in essence be considered a “Sociopath”, because they don’t follow the socially-constructed/instilled creeds.  But before one leaps to “axe-murderer” ideas of psychopathy: is there a possibility of a “benevolent psychopath/sociopath”?  According to Wikipedia on psychopathy ( https://en.wikipedia.org/wiki/Psychopathy ), it is marked by anti-social behavior, lack of empathy and remorse, boldness, and dis-inhibited, egotistical traits.

At some level, that would seem to be anyone who gets a “taste of Power”.  But what is Power from a potentially Nihilistic view?  It is just another meaning one can have: an ability to control, and influence, events.  Although, couldn’t the spinning of meanings, and the re-purposing of them, be considered a power?  Thus, in some way, by digging into the depths of “mental/moral/spiritual/existential Hell” to arrive at the Nihilistic mindset of the Value of Nothing, one potentially realizes that they are tearing and stripping away notions of all meanings (Values, and so morals…etc) for themselves, simply because the World that they lived in at the time stopped supporting them, and making sense to them.

Thus on the way out of that black pit, the pit of Nothingness, one has in some form to make a decision.  Will they play by the rules of the game they have been set up to play, or will they try to enforce their entire reality upon those around them?  I think at some level I have chosen to play by the rules, but to subvert them when I can.  I have no desire to enforce worldviews upon individuals, for that is what creates the Nihilistic Fall in a sense.

I guess at some level I may be a “benevolent sociopath”, but you know what?  That doesn’t bother me, because it is a constructed meaning.  I could just as easily say “Hero”, and that doesn’t sound as bad.  And what is a hero other than one that has transcended/deviated from the norm in some way?  In that regard, tying back to the original question of the post: what would a world be like if there were no shame, or guilt?

I think such a world could possibly be idealistic, if only in the sense that people don’t feel destroyed by shame, or guilt.  That they are comfortable with who they are, and with what they do.  The caveat is that one has to go through the conditioning of a guilt/shame society to internalize mores of some sort, so that they can be stripped away later.  That they have some sort of fundamental basis as to what they value, and what they agree with.  Not so they can enforce it upon others, but to simply have some merit for why they should exist.

A sort of, these morals/values makes sense to me, and the rest can go down to the Black Pit.  The thing is, it’s a constant revision, for to establish a set pattern is anathema to so many notions.  Static things are objects, and not subjects.  Subjects are able to grow/change/mutate to world events as needed.  Thus a pattern that can be established is a pattern of change.